Nov 24 21:02:33 localhost kernel: Linux version 5.14.0-642.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025
Nov 24 21:02:33 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 24 21:02:33 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 24 21:02:33 localhost kernel: BIOS-provided physical RAM map:
Nov 24 21:02:33 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 24 21:02:33 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 24 21:02:33 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 24 21:02:33 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Nov 24 21:02:33 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Nov 24 21:02:33 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 24 21:02:33 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 24 21:02:33 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Nov 24 21:02:33 localhost kernel: NX (Execute Disable) protection: active
Nov 24 21:02:33 localhost kernel: APIC: Static calls initialized
Nov 24 21:02:33 localhost kernel: SMBIOS 2.8 present.
Nov 24 21:02:33 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Nov 24 21:02:33 localhost kernel: Hypervisor detected: KVM
Nov 24 21:02:33 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 24 21:02:33 localhost kernel: kvm-clock: using sched offset of 4291483630 cycles
Nov 24 21:02:33 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 24 21:02:33 localhost kernel: tsc: Detected 2800.000 MHz processor
Nov 24 21:02:33 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 24 21:02:33 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 24 21:02:33 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Nov 24 21:02:33 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 24 21:02:33 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 24 21:02:33 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Nov 24 21:02:33 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Nov 24 21:02:33 localhost kernel: Using GB pages for direct mapping
Nov 24 21:02:33 localhost kernel: RAMDISK: [mem 0x2ed25000-0x3368afff]
Nov 24 21:02:33 localhost kernel: ACPI: Early table checksum verification disabled
Nov 24 21:02:33 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 24 21:02:33 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 24 21:02:33 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 24 21:02:33 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 24 21:02:33 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Nov 24 21:02:33 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 24 21:02:33 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 24 21:02:33 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Nov 24 21:02:33 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Nov 24 21:02:33 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Nov 24 21:02:33 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Nov 24 21:02:33 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Nov 24 21:02:33 localhost kernel: No NUMA configuration found
Nov 24 21:02:33 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Nov 24 21:02:33 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Nov 24 21:02:33 localhost kernel: crashkernel reserved: 0x00000000a6000000 - 0x00000000b6000000 (256 MB)
Nov 24 21:02:33 localhost kernel: Zone ranges:
Nov 24 21:02:33 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 24 21:02:33 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 24 21:02:33 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Nov 24 21:02:33 localhost kernel:   Device   empty
Nov 24 21:02:33 localhost kernel: Movable zone start for each node
Nov 24 21:02:33 localhost kernel: Early memory node ranges
Nov 24 21:02:33 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 24 21:02:33 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Nov 24 21:02:33 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Nov 24 21:02:33 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Nov 24 21:02:33 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 24 21:02:33 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 24 21:02:33 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 24 21:02:33 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Nov 24 21:02:33 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 24 21:02:33 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 24 21:02:33 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 24 21:02:33 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 24 21:02:33 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 24 21:02:33 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 24 21:02:33 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 24 21:02:33 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 24 21:02:33 localhost kernel: TSC deadline timer available
Nov 24 21:02:33 localhost kernel: CPU topo: Max. logical packages:   8
Nov 24 21:02:33 localhost kernel: CPU topo: Max. logical dies:       8
Nov 24 21:02:33 localhost kernel: CPU topo: Max. dies per package:   1
Nov 24 21:02:33 localhost kernel: CPU topo: Max. threads per core:   1
Nov 24 21:02:33 localhost kernel: CPU topo: Num. cores per package:     1
Nov 24 21:02:33 localhost kernel: CPU topo: Num. threads per package:   1
Nov 24 21:02:33 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Nov 24 21:02:33 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 24 21:02:33 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 24 21:02:33 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 24 21:02:33 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 24 21:02:33 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 24 21:02:33 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Nov 24 21:02:33 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 24 21:02:33 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 24 21:02:33 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 24 21:02:33 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 24 21:02:33 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 24 21:02:33 localhost kernel: Booting paravirtualized kernel on KVM
Nov 24 21:02:33 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 24 21:02:33 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Nov 24 21:02:33 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Nov 24 21:02:33 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Nov 24 21:02:33 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Nov 24 21:02:33 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 24 21:02:33 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 24 21:02:33 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64", will be passed to user space.
Nov 24 21:02:33 localhost kernel: random: crng init done
Nov 24 21:02:33 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 24 21:02:33 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 24 21:02:33 localhost kernel: Fallback order for Node 0: 0 
Nov 24 21:02:33 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Nov 24 21:02:33 localhost kernel: Policy zone: Normal
Nov 24 21:02:33 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 24 21:02:33 localhost kernel: software IO TLB: area num 8.
Nov 24 21:02:33 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Nov 24 21:02:33 localhost kernel: ftrace: allocating 49313 entries in 193 pages
Nov 24 21:02:33 localhost kernel: ftrace: allocated 193 pages with 3 groups
Nov 24 21:02:33 localhost kernel: Dynamic Preempt: voluntary
Nov 24 21:02:33 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 24 21:02:33 localhost kernel: rcu:         RCU event tracing is enabled.
Nov 24 21:02:33 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Nov 24 21:02:33 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Nov 24 21:02:33 localhost kernel:         Rude variant of Tasks RCU enabled.
Nov 24 21:02:33 localhost kernel:         Tracing variant of Tasks RCU enabled.
Nov 24 21:02:33 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 24 21:02:33 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Nov 24 21:02:33 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 24 21:02:33 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 24 21:02:33 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 24 21:02:33 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Nov 24 21:02:33 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 24 21:02:33 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 24 21:02:33 localhost kernel: Console: colour VGA+ 80x25
Nov 24 21:02:33 localhost kernel: printk: console [ttyS0] enabled
Nov 24 21:02:33 localhost kernel: ACPI: Core revision 20230331
Nov 24 21:02:33 localhost kernel: APIC: Switch to symmetric I/O mode setup
Nov 24 21:02:33 localhost kernel: x2apic enabled
Nov 24 21:02:33 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Nov 24 21:02:33 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 24 21:02:33 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Nov 24 21:02:33 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 24 21:02:33 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 24 21:02:33 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 24 21:02:33 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 24 21:02:33 localhost kernel: Spectre V2 : Mitigation: Retpolines
Nov 24 21:02:33 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 24 21:02:33 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 24 21:02:33 localhost kernel: RETBleed: Mitigation: untrained return thunk
Nov 24 21:02:33 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 24 21:02:33 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 24 21:02:33 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 24 21:02:33 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 24 21:02:33 localhost kernel: x86/bugs: return thunk changed
Nov 24 21:02:33 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 24 21:02:33 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 24 21:02:33 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 24 21:02:33 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 24 21:02:33 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 24 21:02:33 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 24 21:02:33 localhost kernel: Freeing SMP alternatives memory: 40K
Nov 24 21:02:33 localhost kernel: pid_max: default: 32768 minimum: 301
Nov 24 21:02:33 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Nov 24 21:02:33 localhost kernel: landlock: Up and running.
Nov 24 21:02:33 localhost kernel: Yama: becoming mindful.
Nov 24 21:02:33 localhost kernel: SELinux:  Initializing.
Nov 24 21:02:33 localhost kernel: LSM support for eBPF active
Nov 24 21:02:33 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 24 21:02:33 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 24 21:02:33 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 24 21:02:33 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 24 21:02:33 localhost kernel: ... version:                0
Nov 24 21:02:33 localhost kernel: ... bit width:              48
Nov 24 21:02:33 localhost kernel: ... generic registers:      6
Nov 24 21:02:33 localhost kernel: ... value mask:             0000ffffffffffff
Nov 24 21:02:33 localhost kernel: ... max period:             00007fffffffffff
Nov 24 21:02:33 localhost kernel: ... fixed-purpose events:   0
Nov 24 21:02:33 localhost kernel: ... event mask:             000000000000003f
Nov 24 21:02:33 localhost kernel: signal: max sigframe size: 1776
Nov 24 21:02:33 localhost kernel: rcu: Hierarchical SRCU implementation.
Nov 24 21:02:33 localhost kernel: rcu:         Max phase no-delay instances is 400.
Nov 24 21:02:33 localhost kernel: smp: Bringing up secondary CPUs ...
Nov 24 21:02:33 localhost kernel: smpboot: x86: Booting SMP configuration:
Nov 24 21:02:33 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Nov 24 21:02:33 localhost kernel: smp: Brought up 1 node, 8 CPUs
Nov 24 21:02:33 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Nov 24 21:02:33 localhost kernel: node 0 deferred pages initialised in 9ms
Nov 24 21:02:33 localhost kernel: Memory: 7776576K/8388068K available (16384K kernel code, 5787K rwdata, 13900K rodata, 4192K init, 7172K bss, 605572K reserved, 0K cma-reserved)
Nov 24 21:02:33 localhost kernel: devtmpfs: initialized
Nov 24 21:02:33 localhost kernel: x86/mm: Memory block size: 128MB
Nov 24 21:02:33 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 24 21:02:33 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Nov 24 21:02:33 localhost kernel: pinctrl core: initialized pinctrl subsystem
Nov 24 21:02:33 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 24 21:02:33 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Nov 24 21:02:33 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 24 21:02:33 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 24 21:02:33 localhost kernel: audit: initializing netlink subsys (disabled)
Nov 24 21:02:33 localhost kernel: audit: type=2000 audit(1764018152.220:1): state=initialized audit_enabled=0 res=1
Nov 24 21:02:33 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 24 21:02:33 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 24 21:02:33 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 24 21:02:33 localhost kernel: cpuidle: using governor menu
Nov 24 21:02:33 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 24 21:02:33 localhost kernel: PCI: Using configuration type 1 for base access
Nov 24 21:02:33 localhost kernel: PCI: Using configuration type 1 for extended access
Nov 24 21:02:33 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 24 21:02:33 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 24 21:02:33 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 24 21:02:33 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 24 21:02:33 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 24 21:02:33 localhost kernel: Demotion targets for Node 0: null
Nov 24 21:02:33 localhost kernel: cryptd: max_cpu_qlen set to 1000
Nov 24 21:02:33 localhost kernel: ACPI: Added _OSI(Module Device)
Nov 24 21:02:33 localhost kernel: ACPI: Added _OSI(Processor Device)
Nov 24 21:02:33 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 24 21:02:33 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 24 21:02:33 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 24 21:02:33 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 24 21:02:33 localhost kernel: ACPI: Interpreter enabled
Nov 24 21:02:33 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Nov 24 21:02:33 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Nov 24 21:02:33 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 24 21:02:33 localhost kernel: PCI: Using E820 reservations for host bridge windows
Nov 24 21:02:33 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 24 21:02:33 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 24 21:02:33 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 24 21:02:33 localhost kernel: acpiphp: Slot [3] registered
Nov 24 21:02:33 localhost kernel: acpiphp: Slot [4] registered
Nov 24 21:02:33 localhost kernel: acpiphp: Slot [5] registered
Nov 24 21:02:33 localhost kernel: acpiphp: Slot [6] registered
Nov 24 21:02:33 localhost kernel: acpiphp: Slot [7] registered
Nov 24 21:02:33 localhost kernel: acpiphp: Slot [8] registered
Nov 24 21:02:33 localhost kernel: acpiphp: Slot [9] registered
Nov 24 21:02:33 localhost kernel: acpiphp: Slot [10] registered
Nov 24 21:02:33 localhost kernel: acpiphp: Slot [11] registered
Nov 24 21:02:33 localhost kernel: acpiphp: Slot [12] registered
Nov 24 21:02:33 localhost kernel: acpiphp: Slot [13] registered
Nov 24 21:02:33 localhost kernel: acpiphp: Slot [14] registered
Nov 24 21:02:33 localhost kernel: acpiphp: Slot [15] registered
Nov 24 21:02:33 localhost kernel: acpiphp: Slot [16] registered
Nov 24 21:02:33 localhost kernel: acpiphp: Slot [17] registered
Nov 24 21:02:33 localhost kernel: acpiphp: Slot [18] registered
Nov 24 21:02:33 localhost kernel: acpiphp: Slot [19] registered
Nov 24 21:02:33 localhost kernel: acpiphp: Slot [20] registered
Nov 24 21:02:33 localhost kernel: acpiphp: Slot [21] registered
Nov 24 21:02:33 localhost kernel: acpiphp: Slot [22] registered
Nov 24 21:02:33 localhost kernel: acpiphp: Slot [23] registered
Nov 24 21:02:33 localhost kernel: acpiphp: Slot [24] registered
Nov 24 21:02:33 localhost kernel: acpiphp: Slot [25] registered
Nov 24 21:02:33 localhost kernel: acpiphp: Slot [26] registered
Nov 24 21:02:33 localhost kernel: acpiphp: Slot [27] registered
Nov 24 21:02:33 localhost kernel: acpiphp: Slot [28] registered
Nov 24 21:02:33 localhost kernel: acpiphp: Slot [29] registered
Nov 24 21:02:33 localhost kernel: acpiphp: Slot [30] registered
Nov 24 21:02:33 localhost kernel: acpiphp: Slot [31] registered
Nov 24 21:02:33 localhost kernel: PCI host bridge to bus 0000:00
Nov 24 21:02:33 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 24 21:02:33 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 24 21:02:33 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 24 21:02:33 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 24 21:02:33 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Nov 24 21:02:33 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 24 21:02:33 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 24 21:02:33 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 24 21:02:33 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 24 21:02:33 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Nov 24 21:02:33 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Nov 24 21:02:33 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Nov 24 21:02:33 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Nov 24 21:02:33 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Nov 24 21:02:33 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 24 21:02:33 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Nov 24 21:02:33 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 24 21:02:33 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 24 21:02:33 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 24 21:02:33 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 24 21:02:33 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 24 21:02:33 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 24 21:02:33 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Nov 24 21:02:33 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Nov 24 21:02:33 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 24 21:02:33 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 24 21:02:33 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Nov 24 21:02:33 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Nov 24 21:02:33 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 24 21:02:33 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Nov 24 21:02:33 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 24 21:02:33 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Nov 24 21:02:33 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Nov 24 21:02:33 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 24 21:02:33 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 24 21:02:33 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Nov 24 21:02:33 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 24 21:02:33 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 24 21:02:33 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Nov 24 21:02:33 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 24 21:02:33 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 24 21:02:33 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 24 21:02:33 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 24 21:02:33 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 24 21:02:33 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 24 21:02:33 localhost kernel: iommu: Default domain type: Translated
Nov 24 21:02:33 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 24 21:02:33 localhost kernel: SCSI subsystem initialized
Nov 24 21:02:33 localhost kernel: ACPI: bus type USB registered
Nov 24 21:02:33 localhost kernel: usbcore: registered new interface driver usbfs
Nov 24 21:02:33 localhost kernel: usbcore: registered new interface driver hub
Nov 24 21:02:33 localhost kernel: usbcore: registered new device driver usb
Nov 24 21:02:33 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 24 21:02:33 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 24 21:02:33 localhost kernel: PTP clock support registered
Nov 24 21:02:33 localhost kernel: EDAC MC: Ver: 3.0.0
Nov 24 21:02:33 localhost kernel: NetLabel: Initializing
Nov 24 21:02:33 localhost kernel: NetLabel:  domain hash size = 128
Nov 24 21:02:33 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 24 21:02:33 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Nov 24 21:02:33 localhost kernel: PCI: Using ACPI for IRQ routing
Nov 24 21:02:33 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 24 21:02:33 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 24 21:02:33 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Nov 24 21:02:33 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 24 21:02:33 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 24 21:02:33 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 24 21:02:33 localhost kernel: vgaarb: loaded
Nov 24 21:02:33 localhost kernel: clocksource: Switched to clocksource kvm-clock
Nov 24 21:02:33 localhost kernel: VFS: Disk quotas dquot_6.6.0
Nov 24 21:02:33 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 24 21:02:33 localhost kernel: pnp: PnP ACPI init
Nov 24 21:02:33 localhost kernel: pnp 00:03: [dma 2]
Nov 24 21:02:33 localhost kernel: pnp: PnP ACPI: found 5 devices
Nov 24 21:02:33 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 24 21:02:33 localhost kernel: NET: Registered PF_INET protocol family
Nov 24 21:02:33 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 24 21:02:33 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 24 21:02:33 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 24 21:02:33 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 24 21:02:33 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 24 21:02:33 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 24 21:02:33 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Nov 24 21:02:33 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 24 21:02:33 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 24 21:02:33 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 24 21:02:33 localhost kernel: NET: Registered PF_XDP protocol family
Nov 24 21:02:33 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 24 21:02:33 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 24 21:02:33 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 24 21:02:33 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 24 21:02:33 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Nov 24 21:02:33 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 24 21:02:33 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 24 21:02:33 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 24 21:02:33 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 72659 usecs
Nov 24 21:02:33 localhost kernel: PCI: CLS 0 bytes, default 64
Nov 24 21:02:33 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 24 21:02:33 localhost kernel: software IO TLB: mapped [mem 0x00000000bbfdb000-0x00000000bffdb000] (64MB)
Nov 24 21:02:33 localhost kernel: Trying to unpack rootfs image as initramfs...
Nov 24 21:02:33 localhost kernel: ACPI: bus type thunderbolt registered
Nov 24 21:02:33 localhost kernel: Initialise system trusted keyrings
Nov 24 21:02:33 localhost kernel: Key type blacklist registered
Nov 24 21:02:33 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Nov 24 21:02:33 localhost kernel: zbud: loaded
Nov 24 21:02:33 localhost kernel: integrity: Platform Keyring initialized
Nov 24 21:02:33 localhost kernel: integrity: Machine keyring initialized
Nov 24 21:02:33 localhost kernel: Freeing initrd memory: 75160K
Nov 24 21:02:33 localhost kernel: NET: Registered PF_ALG protocol family
Nov 24 21:02:33 localhost kernel: xor: automatically using best checksumming function   avx       
Nov 24 21:02:33 localhost kernel: Key type asymmetric registered
Nov 24 21:02:33 localhost kernel: Asymmetric key parser 'x509' registered
Nov 24 21:02:33 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 24 21:02:33 localhost kernel: io scheduler mq-deadline registered
Nov 24 21:02:33 localhost kernel: io scheduler kyber registered
Nov 24 21:02:33 localhost kernel: io scheduler bfq registered
Nov 24 21:02:33 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 24 21:02:33 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 24 21:02:33 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 24 21:02:33 localhost kernel: ACPI: button: Power Button [PWRF]
Nov 24 21:02:33 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 24 21:02:33 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 24 21:02:33 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 24 21:02:33 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 24 21:02:33 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 24 21:02:33 localhost kernel: Non-volatile memory driver v1.3
Nov 24 21:02:33 localhost kernel: rdac: device handler registered
Nov 24 21:02:33 localhost kernel: hp_sw: device handler registered
Nov 24 21:02:33 localhost kernel: emc: device handler registered
Nov 24 21:02:33 localhost kernel: alua: device handler registered
Nov 24 21:02:33 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 24 21:02:33 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 24 21:02:33 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 24 21:02:33 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Nov 24 21:02:33 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 24 21:02:33 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 24 21:02:33 localhost kernel: usb usb1: Product: UHCI Host Controller
Nov 24 21:02:33 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-642.el9.x86_64 uhci_hcd
Nov 24 21:02:33 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Nov 24 21:02:33 localhost kernel: hub 1-0:1.0: USB hub found
Nov 24 21:02:33 localhost kernel: hub 1-0:1.0: 2 ports detected
Nov 24 21:02:33 localhost kernel: usbcore: registered new interface driver usbserial_generic
Nov 24 21:02:33 localhost kernel: usbserial: USB Serial support registered for generic
Nov 24 21:02:33 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 24 21:02:33 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 24 21:02:33 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 24 21:02:33 localhost kernel: mousedev: PS/2 mouse device common for all mice
Nov 24 21:02:33 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 24 21:02:33 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 24 21:02:33 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 24 21:02:33 localhost kernel: rtc_cmos 00:04: registered as rtc0
Nov 24 21:02:33 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-11-24T21:02:32 UTC (1764018152)
Nov 24 21:02:33 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 24 21:02:33 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 24 21:02:33 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 24 21:02:33 localhost kernel: usbcore: registered new interface driver usbhid
Nov 24 21:02:33 localhost kernel: usbhid: USB HID core driver
Nov 24 21:02:33 localhost kernel: drop_monitor: Initializing network drop monitor service
Nov 24 21:02:33 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 24 21:02:33 localhost kernel: Initializing XFRM netlink socket
Nov 24 21:02:33 localhost kernel: NET: Registered PF_INET6 protocol family
Nov 24 21:02:33 localhost kernel: Segment Routing with IPv6
Nov 24 21:02:33 localhost kernel: NET: Registered PF_PACKET protocol family
Nov 24 21:02:33 localhost kernel: mpls_gso: MPLS GSO support
Nov 24 21:02:33 localhost kernel: IPI shorthand broadcast: enabled
Nov 24 21:02:33 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Nov 24 21:02:33 localhost kernel: AES CTR mode by8 optimization enabled
Nov 24 21:02:33 localhost kernel: sched_clock: Marking stable (1214008932, 140803379)->(1430902198, -76089887)
Nov 24 21:02:33 localhost kernel: registered taskstats version 1
Nov 24 21:02:33 localhost kernel: Loading compiled-in X.509 certificates
Nov 24 21:02:33 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 24 21:02:33 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 24 21:02:33 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 24 21:02:33 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Nov 24 21:02:33 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Nov 24 21:02:33 localhost kernel: Demotion targets for Node 0: null
Nov 24 21:02:33 localhost kernel: page_owner is disabled
Nov 24 21:02:33 localhost kernel: Key type .fscrypt registered
Nov 24 21:02:33 localhost kernel: Key type fscrypt-provisioning registered
Nov 24 21:02:33 localhost kernel: Key type big_key registered
Nov 24 21:02:33 localhost kernel: Key type encrypted registered
Nov 24 21:02:33 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 24 21:02:33 localhost kernel: Loading compiled-in module X.509 certificates
Nov 24 21:02:33 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 24 21:02:33 localhost kernel: ima: Allocated hash algorithm: sha256
Nov 24 21:02:33 localhost kernel: ima: No architecture policies found
Nov 24 21:02:33 localhost kernel: evm: Initialising EVM extended attributes:
Nov 24 21:02:33 localhost kernel: evm: security.selinux
Nov 24 21:02:33 localhost kernel: evm: security.SMACK64 (disabled)
Nov 24 21:02:33 localhost kernel: evm: security.SMACK64EXEC (disabled)
Nov 24 21:02:33 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 24 21:02:33 localhost kernel: evm: security.SMACK64MMAP (disabled)
Nov 24 21:02:33 localhost kernel: evm: security.apparmor (disabled)
Nov 24 21:02:33 localhost kernel: evm: security.ima
Nov 24 21:02:33 localhost kernel: evm: security.capability
Nov 24 21:02:33 localhost kernel: evm: HMAC attrs: 0x1
Nov 24 21:02:33 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 24 21:02:33 localhost kernel: Running certificate verification RSA selftest
Nov 24 21:02:33 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 24 21:02:33 localhost kernel: Running certificate verification ECDSA selftest
Nov 24 21:02:33 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Nov 24 21:02:33 localhost kernel: clk: Disabling unused clocks
Nov 24 21:02:33 localhost kernel: Freeing unused decrypted memory: 2028K
Nov 24 21:02:33 localhost kernel: Freeing unused kernel image (initmem) memory: 4192K
Nov 24 21:02:33 localhost kernel: Write protecting the kernel read-only data: 30720k
Nov 24 21:02:33 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Nov 24 21:02:33 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 24 21:02:33 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 24 21:02:33 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Nov 24 21:02:33 localhost kernel: usb 1-1: Manufacturer: QEMU
Nov 24 21:02:33 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Nov 24 21:02:33 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 24 21:02:33 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Nov 24 21:02:33 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 24 21:02:33 localhost kernel: Run /init as init process
Nov 24 21:02:33 localhost kernel:   with arguments:
Nov 24 21:02:33 localhost kernel:     /init
Nov 24 21:02:33 localhost kernel:   with environment:
Nov 24 21:02:33 localhost kernel:     HOME=/
Nov 24 21:02:33 localhost kernel:     TERM=linux
Nov 24 21:02:33 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64
Nov 24 21:02:33 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 24 21:02:33 localhost systemd[1]: Detected virtualization kvm.
Nov 24 21:02:33 localhost systemd[1]: Detected architecture x86-64.
Nov 24 21:02:33 localhost systemd[1]: Running in initrd.
Nov 24 21:02:33 localhost systemd[1]: No hostname configured, using default hostname.
Nov 24 21:02:33 localhost systemd[1]: Hostname set to <localhost>.
Nov 24 21:02:33 localhost systemd[1]: Initializing machine ID from VM UUID.
Nov 24 21:02:33 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Nov 24 21:02:33 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 24 21:02:33 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 24 21:02:33 localhost systemd[1]: Reached target Initrd /usr File System.
Nov 24 21:02:33 localhost systemd[1]: Reached target Local File Systems.
Nov 24 21:02:33 localhost systemd[1]: Reached target Path Units.
Nov 24 21:02:33 localhost systemd[1]: Reached target Slice Units.
Nov 24 21:02:33 localhost systemd[1]: Reached target Swaps.
Nov 24 21:02:33 localhost systemd[1]: Reached target Timer Units.
Nov 24 21:02:33 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 24 21:02:33 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Nov 24 21:02:33 localhost systemd[1]: Listening on Journal Socket.
Nov 24 21:02:33 localhost systemd[1]: Listening on udev Control Socket.
Nov 24 21:02:33 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 24 21:02:33 localhost systemd[1]: Reached target Socket Units.
Nov 24 21:02:33 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 24 21:02:33 localhost systemd[1]: Starting Journal Service...
Nov 24 21:02:33 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 24 21:02:33 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 24 21:02:33 localhost systemd[1]: Starting Create System Users...
Nov 24 21:02:33 localhost systemd[1]: Starting Setup Virtual Console...
Nov 24 21:02:33 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 24 21:02:33 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 24 21:02:33 localhost systemd-journald[309]: Journal started
Nov 24 21:02:33 localhost systemd-journald[309]: Runtime Journal (/run/log/journal/c15acc49e00e4e10af5a4da075840387) is 8.0M, max 153.6M, 145.6M free.
Nov 24 21:02:33 localhost systemd[1]: Started Journal Service.
Nov 24 21:02:33 localhost systemd-sysusers[313]: Creating group 'users' with GID 100.
Nov 24 21:02:33 localhost systemd-sysusers[313]: Creating group 'dbus' with GID 81.
Nov 24 21:02:33 localhost systemd-sysusers[313]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 24 21:02:33 localhost systemd[1]: Finished Create System Users.
Nov 24 21:02:33 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 24 21:02:33 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 24 21:02:33 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 24 21:02:33 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 24 21:02:33 localhost systemd[1]: Finished Setup Virtual Console.
Nov 24 21:02:33 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 24 21:02:33 localhost systemd[1]: Starting dracut cmdline hook...
Nov 24 21:02:33 localhost dracut-cmdline[328]: dracut-9 dracut-057-102.git20250818.el9
Nov 24 21:02:33 localhost dracut-cmdline[328]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 24 21:02:33 localhost systemd[1]: Finished dracut cmdline hook.
Nov 24 21:02:33 localhost systemd[1]: Starting dracut pre-udev hook...
Nov 24 21:02:33 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 24 21:02:33 localhost kernel: device-mapper: uevent: version 1.0.3
Nov 24 21:02:33 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Nov 24 21:02:33 localhost kernel: RPC: Registered named UNIX socket transport module.
Nov 24 21:02:33 localhost kernel: RPC: Registered udp transport module.
Nov 24 21:02:33 localhost kernel: RPC: Registered tcp transport module.
Nov 24 21:02:33 localhost kernel: RPC: Registered tcp-with-tls transport module.
Nov 24 21:02:33 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 24 21:02:33 localhost rpc.statd[443]: Version 2.5.4 starting
Nov 24 21:02:33 localhost rpc.statd[443]: Initializing NSM state
Nov 24 21:02:33 localhost rpc.idmapd[448]: Setting log level to 0
Nov 24 21:02:33 localhost systemd[1]: Finished dracut pre-udev hook.
Nov 24 21:02:33 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 24 21:02:33 localhost systemd-udevd[461]: Using default interface naming scheme 'rhel-9.0'.
Nov 24 21:02:33 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 24 21:02:33 localhost systemd[1]: Starting dracut pre-trigger hook...
Nov 24 21:02:33 localhost systemd[1]: Finished dracut pre-trigger hook.
Nov 24 21:02:33 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 24 21:02:34 localhost systemd[1]: Created slice Slice /system/modprobe.
Nov 24 21:02:34 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 24 21:02:34 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 24 21:02:34 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 24 21:02:34 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 24 21:02:34 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 24 21:02:34 localhost systemd[1]: Reached target Network.
Nov 24 21:02:34 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 24 21:02:34 localhost systemd[1]: Starting dracut initqueue hook...
Nov 24 21:02:34 localhost kernel: libata version 3.00 loaded.
Nov 24 21:02:34 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Nov 24 21:02:34 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Nov 24 21:02:34 localhost kernel: scsi host0: ata_piix
Nov 24 21:02:34 localhost kernel: scsi host1: ata_piix
Nov 24 21:02:34 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Nov 24 21:02:34 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Nov 24 21:02:34 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Nov 24 21:02:34 localhost kernel:  vda: vda1
Nov 24 21:02:34 localhost systemd[1]: Mounting Kernel Configuration File System...
Nov 24 21:02:34 localhost systemd[1]: Mounted Kernel Configuration File System.
Nov 24 21:02:34 localhost systemd[1]: Reached target System Initialization.
Nov 24 21:02:34 localhost systemd[1]: Reached target Basic System.
Nov 24 21:02:34 localhost kernel: ata1: found unknown device (class 0)
Nov 24 21:02:34 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 24 21:02:34 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 24 21:02:34 localhost systemd-udevd[495]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 21:02:34 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 24 21:02:34 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 24 21:02:34 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 24 21:02:34 localhost systemd[1]: Found device /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 24 21:02:34 localhost systemd[1]: Reached target Initrd Root Device.
Nov 24 21:02:34 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Nov 24 21:02:34 localhost systemd[1]: Finished dracut initqueue hook.
Nov 24 21:02:34 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Nov 24 21:02:34 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Nov 24 21:02:34 localhost systemd[1]: Reached target Remote File Systems.
Nov 24 21:02:34 localhost systemd[1]: Starting dracut pre-mount hook...
Nov 24 21:02:34 localhost systemd[1]: Finished dracut pre-mount hook.
Nov 24 21:02:34 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709...
Nov 24 21:02:34 localhost systemd-fsck[556]: /usr/sbin/fsck.xfs: XFS file system.
Nov 24 21:02:34 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 24 21:02:34 localhost systemd[1]: Mounting /sysroot...
Nov 24 21:02:35 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 24 21:02:35 localhost kernel: XFS (vda1): Mounting V5 Filesystem 47e3724e-7a1b-439a-9543-b98c9a290709
Nov 24 21:02:35 localhost kernel: XFS (vda1): Ending clean mount
Nov 24 21:02:35 localhost systemd[1]: Mounted /sysroot.
Nov 24 21:02:35 localhost systemd[1]: Reached target Initrd Root File System.
Nov 24 21:02:35 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 24 21:02:35 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 24 21:02:35 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 24 21:02:35 localhost systemd[1]: Reached target Initrd File Systems.
Nov 24 21:02:35 localhost systemd[1]: Reached target Initrd Default Target.
Nov 24 21:02:35 localhost systemd[1]: Starting dracut mount hook...
Nov 24 21:02:35 localhost systemd[1]: Finished dracut mount hook.
Nov 24 21:02:35 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 24 21:02:35 localhost rpc.idmapd[448]: exiting on signal 15
Nov 24 21:02:35 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 24 21:02:35 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 24 21:02:35 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 24 21:02:35 localhost systemd[1]: Stopped target Network.
Nov 24 21:02:35 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 24 21:02:35 localhost systemd[1]: Stopped target Timer Units.
Nov 24 21:02:35 localhost systemd[1]: dbus.socket: Deactivated successfully.
Nov 24 21:02:35 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 24 21:02:35 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 24 21:02:35 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 24 21:02:35 localhost systemd[1]: Stopped target Initrd Default Target.
Nov 24 21:02:35 localhost systemd[1]: Stopped target Basic System.
Nov 24 21:02:35 localhost systemd[1]: Stopped target Initrd Root Device.
Nov 24 21:02:35 localhost systemd[1]: Stopped target Initrd /usr File System.
Nov 24 21:02:35 localhost systemd[1]: Stopped target Path Units.
Nov 24 21:02:35 localhost systemd[1]: Stopped target Remote File Systems.
Nov 24 21:02:35 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 24 21:02:35 localhost systemd[1]: Stopped target Slice Units.
Nov 24 21:02:35 localhost systemd[1]: Stopped target Socket Units.
Nov 24 21:02:35 localhost systemd[1]: Stopped target System Initialization.
Nov 24 21:02:35 localhost systemd[1]: Stopped target Local File Systems.
Nov 24 21:02:35 localhost systemd[1]: Stopped target Swaps.
Nov 24 21:02:35 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 24 21:02:35 localhost systemd[1]: Stopped dracut mount hook.
Nov 24 21:02:35 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 24 21:02:35 localhost systemd[1]: Stopped dracut pre-mount hook.
Nov 24 21:02:35 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Nov 24 21:02:35 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 24 21:02:35 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 24 21:02:35 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 24 21:02:35 localhost systemd[1]: Stopped dracut initqueue hook.
Nov 24 21:02:35 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 24 21:02:35 localhost systemd[1]: Stopped Apply Kernel Variables.
Nov 24 21:02:35 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 24 21:02:35 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Nov 24 21:02:35 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 24 21:02:35 localhost systemd[1]: Stopped Coldplug All udev Devices.
Nov 24 21:02:35 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 24 21:02:35 localhost systemd[1]: Stopped dracut pre-trigger hook.
Nov 24 21:02:35 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 24 21:02:35 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 24 21:02:35 localhost systemd[1]: Stopped Setup Virtual Console.
Nov 24 21:02:35 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 24 21:02:35 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 24 21:02:35 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 24 21:02:35 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 24 21:02:35 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 24 21:02:35 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 24 21:02:35 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 24 21:02:35 localhost systemd[1]: Closed udev Control Socket.
Nov 24 21:02:35 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 24 21:02:35 localhost systemd[1]: Closed udev Kernel Socket.
Nov 24 21:02:35 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 24 21:02:35 localhost systemd[1]: Stopped dracut pre-udev hook.
Nov 24 21:02:35 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 24 21:02:35 localhost systemd[1]: Stopped dracut cmdline hook.
Nov 24 21:02:35 localhost systemd[1]: Starting Cleanup udev Database...
Nov 24 21:02:35 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 24 21:02:35 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 24 21:02:35 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 24 21:02:35 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Nov 24 21:02:35 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 24 21:02:35 localhost systemd[1]: Stopped Create System Users.
Nov 24 21:02:35 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 24 21:02:35 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Nov 24 21:02:35 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 24 21:02:35 localhost systemd[1]: Finished Cleanup udev Database.
Nov 24 21:02:35 localhost systemd[1]: Reached target Switch Root.
Nov 24 21:02:35 localhost systemd[1]: Starting Switch Root...
Nov 24 21:02:35 localhost systemd[1]: Switching root.
Nov 24 21:02:35 localhost systemd-journald[309]: Journal stopped
Nov 24 21:02:36 localhost systemd-journald[309]: Received SIGTERM from PID 1 (systemd).
Nov 24 21:02:36 localhost kernel: audit: type=1404 audit(1764018155.862:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Nov 24 21:02:36 localhost kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 21:02:36 localhost kernel: SELinux:  policy capability open_perms=1
Nov 24 21:02:36 localhost kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 21:02:36 localhost kernel: SELinux:  policy capability always_check_network=0
Nov 24 21:02:36 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 21:02:36 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 21:02:36 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 21:02:36 localhost kernel: audit: type=1403 audit(1764018156.075:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 24 21:02:36 localhost systemd[1]: Successfully loaded SELinux policy in 220.274ms.
Nov 24 21:02:36 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 50.723ms.
Nov 24 21:02:36 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 24 21:02:36 localhost systemd[1]: Detected virtualization kvm.
Nov 24 21:02:36 localhost systemd[1]: Detected architecture x86-64.
Nov 24 21:02:36 localhost systemd-rc-local-generator[639]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:02:36 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 24 21:02:36 localhost systemd[1]: Stopped Switch Root.
Nov 24 21:02:36 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 24 21:02:36 localhost systemd[1]: Created slice Slice /system/getty.
Nov 24 21:02:36 localhost systemd[1]: Created slice Slice /system/serial-getty.
Nov 24 21:02:36 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Nov 24 21:02:36 localhost systemd[1]: Created slice User and Session Slice.
Nov 24 21:02:36 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 24 21:02:36 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Nov 24 21:02:36 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 24 21:02:36 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 24 21:02:36 localhost systemd[1]: Stopped target Switch Root.
Nov 24 21:02:36 localhost systemd[1]: Stopped target Initrd File Systems.
Nov 24 21:02:36 localhost systemd[1]: Stopped target Initrd Root File System.
Nov 24 21:02:36 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Nov 24 21:02:36 localhost systemd[1]: Reached target Path Units.
Nov 24 21:02:36 localhost systemd[1]: Reached target rpc_pipefs.target.
Nov 24 21:02:36 localhost systemd[1]: Reached target Slice Units.
Nov 24 21:02:36 localhost systemd[1]: Reached target Swaps.
Nov 24 21:02:36 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Nov 24 21:02:36 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Nov 24 21:02:36 localhost systemd[1]: Reached target RPC Port Mapper.
Nov 24 21:02:36 localhost systemd[1]: Listening on Process Core Dump Socket.
Nov 24 21:02:36 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Nov 24 21:02:36 localhost systemd[1]: Listening on udev Control Socket.
Nov 24 21:02:36 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 24 21:02:36 localhost systemd[1]: Mounting Huge Pages File System...
Nov 24 21:02:36 localhost systemd[1]: Mounting POSIX Message Queue File System...
Nov 24 21:02:36 localhost systemd[1]: Mounting Kernel Debug File System...
Nov 24 21:02:36 localhost systemd[1]: Mounting Kernel Trace File System...
Nov 24 21:02:36 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 24 21:02:36 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 24 21:02:36 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 24 21:02:36 localhost systemd[1]: Starting Load Kernel Module drm...
Nov 24 21:02:36 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Nov 24 21:02:36 localhost systemd[1]: Starting Load Kernel Module fuse...
Nov 24 21:02:36 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Nov 24 21:02:36 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 24 21:02:36 localhost systemd[1]: Stopped File System Check on Root Device.
Nov 24 21:02:36 localhost systemd[1]: Stopped Journal Service.
Nov 24 21:02:36 localhost systemd[1]: Starting Journal Service...
Nov 24 21:02:36 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 24 21:02:36 localhost systemd[1]: Starting Generate network units from Kernel command line...
Nov 24 21:02:36 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 24 21:02:36 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Nov 24 21:02:36 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 24 21:02:36 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 24 21:02:36 localhost kernel: fuse: init (API version 7.37)
Nov 24 21:02:36 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 24 21:02:36 localhost systemd-journald[680]: Journal started
Nov 24 21:02:36 localhost systemd-journald[680]: Runtime Journal (/run/log/journal/fee38d0f94bf6f4b17ec77ba536bd6ab) is 8.0M, max 153.6M, 145.6M free.
Nov 24 21:02:36 localhost systemd[1]: Queued start job for default target Multi-User System.
Nov 24 21:02:36 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 24 21:02:36 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Nov 24 21:02:36 localhost systemd[1]: Started Journal Service.
Nov 24 21:02:36 localhost systemd[1]: Mounted Huge Pages File System.
Nov 24 21:02:36 localhost systemd[1]: Mounted POSIX Message Queue File System.
Nov 24 21:02:36 localhost systemd[1]: Mounted Kernel Debug File System.
Nov 24 21:02:36 localhost systemd[1]: Mounted Kernel Trace File System.
Nov 24 21:02:36 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 24 21:02:36 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 24 21:02:36 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 24 21:02:36 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 24 21:02:36 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Nov 24 21:02:36 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 24 21:02:36 localhost systemd[1]: Finished Load Kernel Module fuse.
Nov 24 21:02:36 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Nov 24 21:02:36 localhost systemd[1]: Finished Generate network units from Kernel command line.
Nov 24 21:02:36 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Nov 24 21:02:36 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 24 21:02:36 localhost kernel: ACPI: bus type drm_connector registered
Nov 24 21:02:36 localhost systemd[1]: Mounting FUSE Control File System...
Nov 24 21:02:36 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 24 21:02:36 localhost systemd[1]: Starting Rebuild Hardware Database...
Nov 24 21:02:36 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 24 21:02:36 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 24 21:02:36 localhost systemd[1]: Starting Load/Save OS Random Seed...
Nov 24 21:02:36 localhost systemd[1]: Starting Create System Users...
Nov 24 21:02:36 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 24 21:02:36 localhost systemd[1]: Finished Load Kernel Module drm.
Nov 24 21:02:36 localhost systemd[1]: Mounted FUSE Control File System.
Nov 24 21:02:36 localhost systemd-journald[680]: Runtime Journal (/run/log/journal/fee38d0f94bf6f4b17ec77ba536bd6ab) is 8.0M, max 153.6M, 145.6M free.
Nov 24 21:02:36 localhost systemd-journald[680]: Received client request to flush runtime journal.
Nov 24 21:02:36 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Nov 24 21:02:36 localhost systemd[1]: Finished Load/Save OS Random Seed.
Nov 24 21:02:36 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 24 21:02:36 localhost systemd[1]: Finished Create System Users.
Nov 24 21:02:36 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 24 21:02:36 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 24 21:02:37 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 24 21:02:37 localhost systemd[1]: Reached target Preparation for Local File Systems.
Nov 24 21:02:37 localhost systemd[1]: Reached target Local File Systems.
Nov 24 21:02:37 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Nov 24 21:02:37 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Nov 24 21:02:37 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 24 21:02:37 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Nov 24 21:02:37 localhost systemd[1]: Starting Automatic Boot Loader Update...
Nov 24 21:02:37 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Nov 24 21:02:37 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 24 21:02:37 localhost bootctl[699]: Couldn't find EFI system partition, skipping.
Nov 24 21:02:37 localhost systemd[1]: Finished Automatic Boot Loader Update.
Nov 24 21:02:37 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 24 21:02:37 localhost systemd[1]: Starting Security Auditing Service...
Nov 24 21:02:37 localhost systemd[1]: Starting RPC Bind...
Nov 24 21:02:37 localhost systemd[1]: Starting Rebuild Journal Catalog...
Nov 24 21:02:37 localhost auditd[705]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Nov 24 21:02:37 localhost auditd[705]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Nov 24 21:02:37 localhost systemd[1]: Finished Rebuild Journal Catalog.
Nov 24 21:02:37 localhost augenrules[710]: /sbin/augenrules: No change
Nov 24 21:02:37 localhost systemd[1]: Started RPC Bind.
Nov 24 21:02:37 localhost augenrules[725]: No rules
Nov 24 21:02:37 localhost augenrules[725]: enabled 1
Nov 24 21:02:37 localhost augenrules[725]: failure 1
Nov 24 21:02:37 localhost augenrules[725]: pid 705
Nov 24 21:02:37 localhost augenrules[725]: rate_limit 0
Nov 24 21:02:37 localhost augenrules[725]: backlog_limit 8192
Nov 24 21:02:37 localhost augenrules[725]: lost 0
Nov 24 21:02:37 localhost augenrules[725]: backlog 0
Nov 24 21:02:37 localhost augenrules[725]: backlog_wait_time 60000
Nov 24 21:02:37 localhost augenrules[725]: backlog_wait_time_actual 0
Nov 24 21:02:37 localhost augenrules[725]: enabled 1
Nov 24 21:02:37 localhost augenrules[725]: failure 1
Nov 24 21:02:37 localhost augenrules[725]: pid 705
Nov 24 21:02:37 localhost augenrules[725]: rate_limit 0
Nov 24 21:02:37 localhost augenrules[725]: backlog_limit 8192
Nov 24 21:02:37 localhost augenrules[725]: lost 0
Nov 24 21:02:37 localhost augenrules[725]: backlog 4
Nov 24 21:02:37 localhost augenrules[725]: backlog_wait_time 60000
Nov 24 21:02:37 localhost augenrules[725]: backlog_wait_time_actual 0
Nov 24 21:02:37 localhost augenrules[725]: enabled 1
Nov 24 21:02:37 localhost augenrules[725]: failure 1
Nov 24 21:02:37 localhost augenrules[725]: pid 705
Nov 24 21:02:37 localhost augenrules[725]: rate_limit 0
Nov 24 21:02:37 localhost augenrules[725]: backlog_limit 8192
Nov 24 21:02:37 localhost augenrules[725]: lost 0
Nov 24 21:02:37 localhost augenrules[725]: backlog 0
Nov 24 21:02:37 localhost augenrules[725]: backlog_wait_time 60000
Nov 24 21:02:37 localhost augenrules[725]: backlog_wait_time_actual 0
Nov 24 21:02:37 localhost systemd[1]: Started Security Auditing Service.
Nov 24 21:02:37 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Nov 24 21:02:37 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Nov 24 21:02:37 localhost systemd[1]: Finished Rebuild Hardware Database.
Nov 24 21:02:37 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 24 21:02:37 localhost systemd-udevd[733]: Using default interface naming scheme 'rhel-9.0'.
Nov 24 21:02:37 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 24 21:02:37 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 24 21:02:37 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Nov 24 21:02:37 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 24 21:02:37 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 24 21:02:37 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Nov 24 21:02:37 localhost systemd-udevd[746]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 21:02:37 localhost systemd[1]: Starting Update is Completed...
Nov 24 21:02:37 localhost systemd[1]: Finished Update is Completed.
Nov 24 21:02:37 localhost systemd[1]: Reached target System Initialization.
Nov 24 21:02:37 localhost systemd[1]: Started dnf makecache --timer.
Nov 24 21:02:37 localhost systemd[1]: Started Daily rotation of log files.
Nov 24 21:02:37 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 24 21:02:37 localhost systemd[1]: Reached target Timer Units.
Nov 24 21:02:37 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 24 21:02:37 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Nov 24 21:02:37 localhost systemd[1]: Reached target Socket Units.
Nov 24 21:02:37 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Nov 24 21:02:37 localhost systemd[1]: Starting D-Bus System Message Bus...
Nov 24 21:02:37 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 24 21:02:37 localhost systemd[1]: Started D-Bus System Message Bus.
Nov 24 21:02:37 localhost systemd[1]: Reached target Basic System.
Nov 24 21:02:37 localhost dbus-broker-lau[779]: Ready
Nov 24 21:02:37 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 24 21:02:37 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 24 21:02:37 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 24 21:02:37 localhost systemd[1]: Starting NTP client/server...
Nov 24 21:02:37 localhost kernel: kvm_amd: TSC scaling supported
Nov 24 21:02:37 localhost kernel: kvm_amd: Nested Virtualization enabled
Nov 24 21:02:37 localhost kernel: kvm_amd: Nested Paging enabled
Nov 24 21:02:37 localhost kernel: kvm_amd: LBR virtualization supported
Nov 24 21:02:37 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 24 21:02:37 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 24 21:02:37 localhost kernel: Console: switching to colour dummy device 80x25
Nov 24 21:02:37 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 24 21:02:37 localhost kernel: [drm] features: -context_init
Nov 24 21:02:37 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Nov 24 21:02:37 localhost kernel: [drm] number of scanouts: 1
Nov 24 21:02:37 localhost kernel: [drm] number of cap sets: 0
Nov 24 21:02:37 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Nov 24 21:02:37 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Nov 24 21:02:37 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 24 21:02:37 localhost kernel: Console: switching to colour frame buffer device 128x48
Nov 24 21:02:37 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 24 21:02:37 localhost systemd[1]: Starting IPv4 firewall with iptables...
Nov 24 21:02:37 localhost systemd[1]: Started irqbalance daemon.
Nov 24 21:02:37 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Nov 24 21:02:37 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 24 21:02:37 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 24 21:02:37 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 24 21:02:37 localhost systemd[1]: Reached target sshd-keygen.target.
Nov 24 21:02:37 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Nov 24 21:02:37 localhost systemd[1]: Reached target User and Group Name Lookups.
Nov 24 21:02:37 localhost systemd[1]: Starting User Login Management...
Nov 24 21:02:37 localhost chronyd[805]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 24 21:02:37 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Nov 24 21:02:37 localhost chronyd[805]: Loaded 0 symmetric keys
Nov 24 21:02:37 localhost chronyd[805]: Using right/UTC timezone to obtain leap second data
Nov 24 21:02:37 localhost chronyd[805]: Loaded seccomp filter (level 2)
Nov 24 21:02:37 localhost systemd[1]: Started NTP client/server.
Nov 24 21:02:37 localhost systemd-logind[806]: New seat seat0.
Nov 24 21:02:37 localhost systemd-logind[806]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 24 21:02:37 localhost systemd-logind[806]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 24 21:02:37 localhost systemd[1]: Started User Login Management.
Nov 24 21:02:37 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Nov 24 21:02:37 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Nov 24 21:02:38 localhost iptables.init[796]: iptables: Applying firewall rules: [  OK  ]
Nov 24 21:02:38 localhost systemd[1]: Finished IPv4 firewall with iptables.
Nov 24 21:02:39 localhost cloud-init[842]: Cloud-init v. 24.4-7.el9 running 'init-local' at Mon, 24 Nov 2025 21:02:39 +0000. Up 8.07 seconds.
Nov 24 21:02:39 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Nov 24 21:02:39 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Nov 24 21:02:39 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpd5xa8q8e.mount: Deactivated successfully.
Nov 24 21:02:39 localhost systemd[1]: Starting Hostname Service...
Nov 24 21:02:39 localhost systemd[1]: Started Hostname Service.
Nov 24 21:02:39 np0005534070.novalocal systemd-hostnamed[856]: Hostname set to <np0005534070.novalocal> (static)
Nov 24 21:02:39 np0005534070.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Nov 24 21:02:39 np0005534070.novalocal systemd[1]: Reached target Preparation for Network.
Nov 24 21:02:39 np0005534070.novalocal systemd[1]: Starting Network Manager...
Nov 24 21:02:40 np0005534070.novalocal NetworkManager[860]: <info>  [1764018160.0521] NetworkManager (version 1.54.1-1.el9) is starting... (boot:6af3ca85-64ae-4c3b-bcae-1314bd1d1259)
Nov 24 21:02:40 np0005534070.novalocal NetworkManager[860]: <info>  [1764018160.0528] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 24 21:02:40 np0005534070.novalocal NetworkManager[860]: <info>  [1764018160.0763] manager[0x558c0aefc080]: monitoring kernel firmware directory '/lib/firmware'.
Nov 24 21:02:40 np0005534070.novalocal NetworkManager[860]: <info>  [1764018160.0833] hostname: hostname: using hostnamed
Nov 24 21:02:40 np0005534070.novalocal NetworkManager[860]: <info>  [1764018160.0835] hostname: static hostname changed from (none) to "np0005534070.novalocal"
Nov 24 21:02:40 np0005534070.novalocal NetworkManager[860]: <info>  [1764018160.0842] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 24 21:02:40 np0005534070.novalocal NetworkManager[860]: <info>  [1764018160.1057] manager[0x558c0aefc080]: rfkill: Wi-Fi hardware radio set enabled
Nov 24 21:02:40 np0005534070.novalocal NetworkManager[860]: <info>  [1764018160.1057] manager[0x558c0aefc080]: rfkill: WWAN hardware radio set enabled
Nov 24 21:02:40 np0005534070.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 24 21:02:40 np0005534070.novalocal NetworkManager[860]: <info>  [1764018160.1187] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 24 21:02:40 np0005534070.novalocal NetworkManager[860]: <info>  [1764018160.1189] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 24 21:02:40 np0005534070.novalocal NetworkManager[860]: <info>  [1764018160.1190] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 24 21:02:40 np0005534070.novalocal NetworkManager[860]: <info>  [1764018160.1193] manager: Networking is enabled by state file
Nov 24 21:02:40 np0005534070.novalocal NetworkManager[860]: <info>  [1764018160.1198] settings: Loaded settings plugin: keyfile (internal)
Nov 24 21:02:40 np0005534070.novalocal NetworkManager[860]: <info>  [1764018160.1242] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 24 21:02:40 np0005534070.novalocal NetworkManager[860]: <info>  [1764018160.1281] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 24 21:02:40 np0005534070.novalocal NetworkManager[860]: <info>  [1764018160.1330] dhcp: init: Using DHCP client 'internal'
Nov 24 21:02:40 np0005534070.novalocal NetworkManager[860]: <info>  [1764018160.1335] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 24 21:02:40 np0005534070.novalocal NetworkManager[860]: <info>  [1764018160.1350] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 21:02:40 np0005534070.novalocal NetworkManager[860]: <info>  [1764018160.1364] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 24 21:02:40 np0005534070.novalocal NetworkManager[860]: <info>  [1764018160.1374] device (lo): Activation: starting connection 'lo' (9e721d51-16df-4701-8382-ea90a88a1946)
Nov 24 21:02:40 np0005534070.novalocal NetworkManager[860]: <info>  [1764018160.1386] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 24 21:02:40 np0005534070.novalocal NetworkManager[860]: <info>  [1764018160.1390] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 21:02:40 np0005534070.novalocal NetworkManager[860]: <info>  [1764018160.1425] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 24 21:02:40 np0005534070.novalocal NetworkManager[860]: <info>  [1764018160.1429] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 24 21:02:40 np0005534070.novalocal NetworkManager[860]: <info>  [1764018160.1432] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 24 21:02:40 np0005534070.novalocal NetworkManager[860]: <info>  [1764018160.1434] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 24 21:02:40 np0005534070.novalocal NetworkManager[860]: <info>  [1764018160.1435] device (eth0): carrier: link connected
Nov 24 21:02:40 np0005534070.novalocal NetworkManager[860]: <info>  [1764018160.1438] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 24 21:02:40 np0005534070.novalocal NetworkManager[860]: <info>  [1764018160.1445] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 24 21:02:40 np0005534070.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 24 21:02:40 np0005534070.novalocal NetworkManager[860]: <info>  [1764018160.1452] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 24 21:02:40 np0005534070.novalocal NetworkManager[860]: <info>  [1764018160.1457] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 24 21:02:40 np0005534070.novalocal NetworkManager[860]: <info>  [1764018160.1458] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 21:02:40 np0005534070.novalocal NetworkManager[860]: <info>  [1764018160.1460] manager: NetworkManager state is now CONNECTING
Nov 24 21:02:40 np0005534070.novalocal NetworkManager[860]: <info>  [1764018160.1462] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 21:02:40 np0005534070.novalocal NetworkManager[860]: <info>  [1764018160.1471] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 21:02:40 np0005534070.novalocal NetworkManager[860]: <info>  [1764018160.1475] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 24 21:02:40 np0005534070.novalocal systemd[1]: Started Network Manager.
Nov 24 21:02:40 np0005534070.novalocal systemd[1]: Reached target Network.
Nov 24 21:02:40 np0005534070.novalocal systemd[1]: Starting Network Manager Wait Online...
Nov 24 21:02:40 np0005534070.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Nov 24 21:02:40 np0005534070.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 24 21:02:40 np0005534070.novalocal NetworkManager[860]: <info>  [1764018160.1683] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 24 21:02:40 np0005534070.novalocal NetworkManager[860]: <info>  [1764018160.1686] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 24 21:02:40 np0005534070.novalocal NetworkManager[860]: <info>  [1764018160.1697] device (lo): Activation: successful, device activated.
Nov 24 21:02:40 np0005534070.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Nov 24 21:02:40 np0005534070.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 24 21:02:40 np0005534070.novalocal systemd[1]: Reached target NFS client services.
Nov 24 21:02:40 np0005534070.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Nov 24 21:02:40 np0005534070.novalocal systemd[1]: Reached target Remote File Systems.
Nov 24 21:02:40 np0005534070.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 24 21:02:41 np0005534070.novalocal NetworkManager[860]: <info>  [1764018161.5082] dhcp4 (eth0): state changed new lease, address=38.102.83.66
Nov 24 21:02:41 np0005534070.novalocal NetworkManager[860]: <info>  [1764018161.5102] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 24 21:02:41 np0005534070.novalocal NetworkManager[860]: <info>  [1764018161.5129] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 21:02:41 np0005534070.novalocal NetworkManager[860]: <info>  [1764018161.5164] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 21:02:41 np0005534070.novalocal NetworkManager[860]: <info>  [1764018161.5165] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 21:02:41 np0005534070.novalocal NetworkManager[860]: <info>  [1764018161.5168] manager: NetworkManager state is now CONNECTED_SITE
Nov 24 21:02:41 np0005534070.novalocal NetworkManager[860]: <info>  [1764018161.5172] device (eth0): Activation: successful, device activated.
Nov 24 21:02:41 np0005534070.novalocal NetworkManager[860]: <info>  [1764018161.5176] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 24 21:02:41 np0005534070.novalocal NetworkManager[860]: <info>  [1764018161.5178] manager: startup complete
Nov 24 21:02:41 np0005534070.novalocal systemd[1]: Finished Network Manager Wait Online.
Nov 24 21:02:41 np0005534070.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Nov 24 21:02:41 np0005534070.novalocal cloud-init[924]: Cloud-init v. 24.4-7.el9 running 'init' at Mon, 24 Nov 2025 21:02:41 +0000. Up 10.49 seconds.
Nov 24 21:02:41 np0005534070.novalocal cloud-init[924]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 24 21:02:41 np0005534070.novalocal cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 24 21:02:41 np0005534070.novalocal cloud-init[924]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Nov 24 21:02:41 np0005534070.novalocal cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 24 21:02:41 np0005534070.novalocal cloud-init[924]: ci-info: |  eth0  | True |         38.102.83.66         | 255.255.255.0 | global | fa:16:3e:5c:b1:29 |
Nov 24 21:02:41 np0005534070.novalocal cloud-init[924]: ci-info: |  eth0  | True | fe80::f816:3eff:fe5c:b129/64 |       .       |  link  | fa:16:3e:5c:b1:29 |
Nov 24 21:02:41 np0005534070.novalocal cloud-init[924]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Nov 24 21:02:41 np0005534070.novalocal cloud-init[924]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Nov 24 21:02:41 np0005534070.novalocal cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 24 21:02:41 np0005534070.novalocal cloud-init[924]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Nov 24 21:02:41 np0005534070.novalocal cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 24 21:02:41 np0005534070.novalocal cloud-init[924]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Nov 24 21:02:41 np0005534070.novalocal cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 24 21:02:41 np0005534070.novalocal cloud-init[924]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Nov 24 21:02:41 np0005534070.novalocal cloud-init[924]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Nov 24 21:02:41 np0005534070.novalocal cloud-init[924]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Nov 24 21:02:41 np0005534070.novalocal cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 24 21:02:41 np0005534070.novalocal cloud-init[924]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Nov 24 21:02:41 np0005534070.novalocal cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 24 21:02:41 np0005534070.novalocal cloud-init[924]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Nov 24 21:02:41 np0005534070.novalocal cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 24 21:02:41 np0005534070.novalocal cloud-init[924]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Nov 24 21:02:41 np0005534070.novalocal cloud-init[924]: ci-info: |   3   |    local    |    ::   |    eth0   |   U   |
Nov 24 21:02:41 np0005534070.novalocal cloud-init[924]: ci-info: |   4   |  multicast  |    ::   |    eth0   |   U   |
Nov 24 21:02:41 np0005534070.novalocal cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 24 21:02:42 np0005534070.novalocal useradd[991]: new group: name=cloud-user, GID=1001
Nov 24 21:02:42 np0005534070.novalocal useradd[991]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Nov 24 21:02:42 np0005534070.novalocal useradd[991]: add 'cloud-user' to group 'adm'
Nov 24 21:02:42 np0005534070.novalocal useradd[991]: add 'cloud-user' to group 'systemd-journal'
Nov 24 21:02:42 np0005534070.novalocal useradd[991]: add 'cloud-user' to shadow group 'adm'
Nov 24 21:02:42 np0005534070.novalocal useradd[991]: add 'cloud-user' to shadow group 'systemd-journal'
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: Generating public/private rsa key pair.
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: The key fingerprint is:
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: SHA256:S2i6KMEeByZXWWyUBZ4npgdwOD0HiBMgL1MvJudf718 root@np0005534070.novalocal
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: The key's randomart image is:
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: +---[RSA 3072]----+
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: |=oo+o*++.        |
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: |+o++=.=.         |
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: |+.=oo+= .        |
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: |.X.. + +         |
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: |+.o . = S        |
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: |.o o = o .       |
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: |..o o   o   E    |
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: |.. . . .   .     |
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: | .. .   ...      |
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: +----[SHA256]-----+
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: Generating public/private ecdsa key pair.
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: The key fingerprint is:
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: SHA256:xKqYXq6XXLkYvzXcaG848UVLSnJA2FGM1n4KBX9LaNo root@np0005534070.novalocal
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: The key's randomart image is:
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: +---[ECDSA 256]---+
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: |       ++B.      |
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: |      ..=.+.     |
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: |       .o++ o    |
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: |       oo++o+.   |
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: |      ..S=E*..   |
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: |   o..o..oo o    |
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: |  o.o* .*+..     |
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: | . o= ooooo      |
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: |  oo. .. o.      |
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: +----[SHA256]-----+
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: Generating public/private ed25519 key pair.
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: The key fingerprint is:
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: SHA256:MZ72JTSF1R+lUuJS3IOnsNaAOkKvGtexVyX52zOYwN8 root@np0005534070.novalocal
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: The key's randomart image is:
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: +--[ED25519 256]--+
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: |         . ==+. o|
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: |    .   . *++o=o |
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: |   . . .o.+Ooo.o.|
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: |    . =. =*o+.  .|
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: |     + +So.o.*   |
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: |  . o o... o= E  |
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: |   +   .  .    o |
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: |  .              |
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: |                 |
Nov 24 21:02:43 np0005534070.novalocal cloud-init[924]: +----[SHA256]-----+
Nov 24 21:02:43 np0005534070.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Nov 24 21:02:43 np0005534070.novalocal systemd[1]: Reached target Cloud-config availability.
Nov 24 21:02:43 np0005534070.novalocal systemd[1]: Reached target Network is Online.
Nov 24 21:02:43 np0005534070.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Nov 24 21:02:43 np0005534070.novalocal systemd[1]: Starting Crash recovery kernel arming...
Nov 24 21:02:43 np0005534070.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Nov 24 21:02:43 np0005534070.novalocal systemd[1]: Starting System Logging Service...
Nov 24 21:02:43 np0005534070.novalocal systemd[1]: Starting OpenSSH server daemon...
Nov 24 21:02:43 np0005534070.novalocal sm-notify[1008]: Version 2.5.4 starting
Nov 24 21:02:43 np0005534070.novalocal systemd[1]: Starting Permit User Sessions...
Nov 24 21:02:43 np0005534070.novalocal systemd[1]: Started Notify NFS peers of a restart.
Nov 24 21:02:43 np0005534070.novalocal sshd[1010]: Server listening on 0.0.0.0 port 22.
Nov 24 21:02:43 np0005534070.novalocal sshd[1010]: Server listening on :: port 22.
Nov 24 21:02:43 np0005534070.novalocal systemd[1]: Started OpenSSH server daemon.
Nov 24 21:02:43 np0005534070.novalocal systemd[1]: Finished Permit User Sessions.
Nov 24 21:02:43 np0005534070.novalocal systemd[1]: Started Command Scheduler.
Nov 24 21:02:43 np0005534070.novalocal systemd[1]: Started Getty on tty1.
Nov 24 21:02:43 np0005534070.novalocal systemd[1]: Started Serial Getty on ttyS0.
Nov 24 21:02:43 np0005534070.novalocal systemd[1]: Reached target Login Prompts.
Nov 24 21:02:43 np0005534070.novalocal crond[1014]: (CRON) STARTUP (1.5.7)
Nov 24 21:02:43 np0005534070.novalocal crond[1014]: (CRON) INFO (Syslog will be used instead of sendmail.)
Nov 24 21:02:43 np0005534070.novalocal crond[1014]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 44% if used.)
Nov 24 21:02:43 np0005534070.novalocal crond[1014]: (CRON) INFO (running with inotify support)
Nov 24 21:02:43 np0005534070.novalocal rsyslogd[1009]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1009" x-info="https://www.rsyslog.com"] start
Nov 24 21:02:43 np0005534070.novalocal systemd[1]: Started System Logging Service.
Nov 24 21:02:43 np0005534070.novalocal rsyslogd[1009]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Nov 24 21:02:43 np0005534070.novalocal systemd[1]: Reached target Multi-User System.
Nov 24 21:02:43 np0005534070.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Nov 24 21:02:43 np0005534070.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 24 21:02:43 np0005534070.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Nov 24 21:02:43 np0005534070.novalocal rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 21:02:43 np0005534070.novalocal kdumpctl[1019]: kdump: No kdump initial ramdisk found.
Nov 24 21:02:43 np0005534070.novalocal kdumpctl[1019]: kdump: Rebuilding /boot/initramfs-5.14.0-642.el9.x86_64kdump.img
Nov 24 21:02:43 np0005534070.novalocal cloud-init[1132]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Mon, 24 Nov 2025 21:02:43 +0000. Up 12.43 seconds.
Nov 24 21:02:43 np0005534070.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Nov 24 21:02:43 np0005534070.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Nov 24 21:02:44 np0005534070.novalocal dracut[1272]: dracut-057-102.git20250818.el9
Nov 24 21:02:44 np0005534070.novalocal cloud-init[1288]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Mon, 24 Nov 2025 21:02:44 +0000. Up 12.86 seconds.
Nov 24 21:02:44 np0005534070.novalocal sshd-session[1289]: Connection reset by 38.102.83.114 port 42100 [preauth]
Nov 24 21:02:44 np0005534070.novalocal sshd-session[1291]: Unable to negotiate with 38.102.83.114 port 42110: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Nov 24 21:02:44 np0005534070.novalocal cloud-init[1296]: #############################################################
Nov 24 21:02:44 np0005534070.novalocal cloud-init[1297]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 24 21:02:44 np0005534070.novalocal sshd-session[1293]: Connection reset by 38.102.83.114 port 42116 [preauth]
Nov 24 21:02:44 np0005534070.novalocal cloud-init[1300]: 256 SHA256:xKqYXq6XXLkYvzXcaG848UVLSnJA2FGM1n4KBX9LaNo root@np0005534070.novalocal (ECDSA)
Nov 24 21:02:44 np0005534070.novalocal sshd-session[1299]: Unable to negotiate with 38.102.83.114 port 42126: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Nov 24 21:02:44 np0005534070.novalocal cloud-init[1305]: 256 SHA256:MZ72JTSF1R+lUuJS3IOnsNaAOkKvGtexVyX52zOYwN8 root@np0005534070.novalocal (ED25519)
Nov 24 21:02:44 np0005534070.novalocal cloud-init[1308]: 3072 SHA256:S2i6KMEeByZXWWyUBZ4npgdwOD0HiBMgL1MvJudf718 root@np0005534070.novalocal (RSA)
Nov 24 21:02:44 np0005534070.novalocal sshd-session[1304]: Unable to negotiate with 38.102.83.114 port 42132: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Nov 24 21:02:44 np0005534070.novalocal cloud-init[1310]: -----END SSH HOST KEY FINGERPRINTS-----
Nov 24 21:02:44 np0005534070.novalocal cloud-init[1314]: #############################################################
Nov 24 21:02:44 np0005534070.novalocal cloud-init[1288]: Cloud-init v. 24.4-7.el9 finished at Mon, 24 Nov 2025 21:02:44 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 13.04 seconds
Nov 24 21:02:44 np0005534070.novalocal sshd-session[1323]: Connection closed by 38.102.83.114 port 42150 [preauth]
Nov 24 21:02:44 np0005534070.novalocal dracut[1274]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-642.el9.x86_64kdump.img 5.14.0-642.el9.x86_64
Nov 24 21:02:44 np0005534070.novalocal sshd-session[1329]: Unable to negotiate with 38.102.83.114 port 42152: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Nov 24 21:02:44 np0005534070.novalocal sshd-session[1336]: Unable to negotiate with 38.102.83.114 port 42168: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Nov 24 21:02:44 np0005534070.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Nov 24 21:02:44 np0005534070.novalocal systemd[1]: Reached target Cloud-init target.
Nov 24 21:02:44 np0005534070.novalocal sshd-session[1312]: Connection closed by 38.102.83.114 port 42140 [preauth]
Nov 24 21:02:44 np0005534070.novalocal dracut[1274]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Nov 24 21:02:44 np0005534070.novalocal dracut[1274]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Nov 24 21:02:44 np0005534070.novalocal dracut[1274]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Nov 24 21:02:44 np0005534070.novalocal dracut[1274]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 24 21:02:44 np0005534070.novalocal dracut[1274]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 24 21:02:44 np0005534070.novalocal dracut[1274]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 24 21:02:44 np0005534070.novalocal dracut[1274]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 24 21:02:44 np0005534070.novalocal dracut[1274]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 24 21:02:44 np0005534070.novalocal dracut[1274]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 24 21:02:44 np0005534070.novalocal dracut[1274]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 24 21:02:44 np0005534070.novalocal dracut[1274]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 24 21:02:44 np0005534070.novalocal dracut[1274]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 24 21:02:44 np0005534070.novalocal dracut[1274]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: Module 'resume' will not be installed, because it's in the list to be omitted!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: memstrack is not available
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: memstrack is not available
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 24 21:02:45 np0005534070.novalocal dracut[1274]: *** Including module: systemd ***
Nov 24 21:02:46 np0005534070.novalocal dracut[1274]: *** Including module: fips ***
Nov 24 21:02:46 np0005534070.novalocal chronyd[805]: Selected source 51.222.12.92 (2.centos.pool.ntp.org)
Nov 24 21:02:46 np0005534070.novalocal chronyd[805]: System clock wrong by 1.129219 seconds
Nov 24 21:02:47 np0005534070.novalocal chronyd[805]: System clock was stepped by 1.129219 seconds
Nov 24 21:02:47 np0005534070.novalocal chronyd[805]: System clock TAI offset set to 37 seconds
Nov 24 21:02:47 np0005534070.novalocal dracut[1274]: *** Including module: systemd-initrd ***
Nov 24 21:02:47 np0005534070.novalocal dracut[1274]: *** Including module: i18n ***
Nov 24 21:02:47 np0005534070.novalocal dracut[1274]: *** Including module: drm ***
Nov 24 21:02:48 np0005534070.novalocal dracut[1274]: *** Including module: prefixdevname ***
Nov 24 21:02:48 np0005534070.novalocal dracut[1274]: *** Including module: kernel-modules ***
Nov 24 21:02:48 np0005534070.novalocal kernel: block vda: the capability attribute has been deprecated.
Nov 24 21:02:49 np0005534070.novalocal dracut[1274]: *** Including module: kernel-modules-extra ***
Nov 24 21:02:49 np0005534070.novalocal dracut[1274]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Nov 24 21:02:49 np0005534070.novalocal dracut[1274]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Nov 24 21:02:49 np0005534070.novalocal dracut[1274]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Nov 24 21:02:49 np0005534070.novalocal dracut[1274]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Nov 24 21:02:49 np0005534070.novalocal dracut[1274]: *** Including module: qemu ***
Nov 24 21:02:49 np0005534070.novalocal dracut[1274]: *** Including module: fstab-sys ***
Nov 24 21:02:49 np0005534070.novalocal dracut[1274]: *** Including module: rootfs-block ***
Nov 24 21:02:49 np0005534070.novalocal dracut[1274]: *** Including module: terminfo ***
Nov 24 21:02:49 np0005534070.novalocal dracut[1274]: *** Including module: udev-rules ***
Nov 24 21:02:49 np0005534070.novalocal irqbalance[801]: Cannot change IRQ 25 affinity: Operation not permitted
Nov 24 21:02:49 np0005534070.novalocal irqbalance[801]: IRQ 25 affinity is now unmanaged
Nov 24 21:02:49 np0005534070.novalocal irqbalance[801]: Cannot change IRQ 31 affinity: Operation not permitted
Nov 24 21:02:49 np0005534070.novalocal irqbalance[801]: IRQ 31 affinity is now unmanaged
Nov 24 21:02:49 np0005534070.novalocal irqbalance[801]: Cannot change IRQ 28 affinity: Operation not permitted
Nov 24 21:02:49 np0005534070.novalocal irqbalance[801]: IRQ 28 affinity is now unmanaged
Nov 24 21:02:49 np0005534070.novalocal irqbalance[801]: Cannot change IRQ 32 affinity: Operation not permitted
Nov 24 21:02:49 np0005534070.novalocal irqbalance[801]: IRQ 32 affinity is now unmanaged
Nov 24 21:02:49 np0005534070.novalocal irqbalance[801]: Cannot change IRQ 30 affinity: Operation not permitted
Nov 24 21:02:49 np0005534070.novalocal irqbalance[801]: IRQ 30 affinity is now unmanaged
Nov 24 21:02:49 np0005534070.novalocal irqbalance[801]: Cannot change IRQ 29 affinity: Operation not permitted
Nov 24 21:02:49 np0005534070.novalocal irqbalance[801]: IRQ 29 affinity is now unmanaged
Nov 24 21:02:49 np0005534070.novalocal dracut[1274]: Skipping udev rule: 91-permissions.rules
Nov 24 21:02:49 np0005534070.novalocal dracut[1274]: Skipping udev rule: 80-drivers-modprobe.rules
Nov 24 21:02:50 np0005534070.novalocal dracut[1274]: *** Including module: virtiofs ***
Nov 24 21:02:50 np0005534070.novalocal dracut[1274]: *** Including module: dracut-systemd ***
Nov 24 21:02:50 np0005534070.novalocal dracut[1274]: *** Including module: usrmount ***
Nov 24 21:02:50 np0005534070.novalocal dracut[1274]: *** Including module: base ***
Nov 24 21:02:50 np0005534070.novalocal dracut[1274]: *** Including module: fs-lib ***
Nov 24 21:02:50 np0005534070.novalocal dracut[1274]: *** Including module: kdumpbase ***
Nov 24 21:02:51 np0005534070.novalocal dracut[1274]: *** Including module: microcode_ctl-fw_dir_override ***
Nov 24 21:02:51 np0005534070.novalocal dracut[1274]:   microcode_ctl module: mangling fw_dir
Nov 24 21:02:51 np0005534070.novalocal dracut[1274]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Nov 24 21:02:51 np0005534070.novalocal dracut[1274]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Nov 24 21:02:51 np0005534070.novalocal dracut[1274]:     microcode_ctl: configuration "intel" is ignored
Nov 24 21:02:51 np0005534070.novalocal dracut[1274]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Nov 24 21:02:51 np0005534070.novalocal dracut[1274]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Nov 24 21:02:51 np0005534070.novalocal dracut[1274]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Nov 24 21:02:51 np0005534070.novalocal dracut[1274]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Nov 24 21:02:51 np0005534070.novalocal dracut[1274]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Nov 24 21:02:51 np0005534070.novalocal dracut[1274]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Nov 24 21:02:51 np0005534070.novalocal dracut[1274]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Nov 24 21:02:51 np0005534070.novalocal dracut[1274]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Nov 24 21:02:51 np0005534070.novalocal dracut[1274]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Nov 24 21:02:51 np0005534070.novalocal dracut[1274]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Nov 24 21:02:51 np0005534070.novalocal dracut[1274]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Nov 24 21:02:51 np0005534070.novalocal dracut[1274]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Nov 24 21:02:51 np0005534070.novalocal dracut[1274]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Nov 24 21:02:51 np0005534070.novalocal dracut[1274]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Nov 24 21:02:51 np0005534070.novalocal dracut[1274]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Nov 24 21:02:51 np0005534070.novalocal dracut[1274]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Nov 24 21:02:51 np0005534070.novalocal dracut[1274]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Nov 24 21:02:51 np0005534070.novalocal dracut[1274]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Nov 24 21:02:51 np0005534070.novalocal dracut[1274]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Nov 24 21:02:51 np0005534070.novalocal dracut[1274]: *** Including module: openssl ***
Nov 24 21:02:51 np0005534070.novalocal dracut[1274]: *** Including module: shutdown ***
Nov 24 21:02:51 np0005534070.novalocal dracut[1274]: *** Including module: squash ***
Nov 24 21:02:51 np0005534070.novalocal dracut[1274]: *** Including modules done ***
Nov 24 21:02:51 np0005534070.novalocal dracut[1274]: *** Installing kernel module dependencies ***
Nov 24 21:02:52 np0005534070.novalocal dracut[1274]: *** Installing kernel module dependencies done ***
Nov 24 21:02:52 np0005534070.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 24 21:02:52 np0005534070.novalocal dracut[1274]: *** Resolving executable dependencies ***
Nov 24 21:02:54 np0005534070.novalocal dracut[1274]: *** Resolving executable dependencies done ***
Nov 24 21:02:54 np0005534070.novalocal dracut[1274]: *** Generating early-microcode cpio image ***
Nov 24 21:02:54 np0005534070.novalocal dracut[1274]: *** Store current command line parameters ***
Nov 24 21:02:54 np0005534070.novalocal dracut[1274]: Stored kernel commandline:
Nov 24 21:02:54 np0005534070.novalocal dracut[1274]: No dracut internal kernel commandline stored in the initramfs
Nov 24 21:02:54 np0005534070.novalocal dracut[1274]: *** Install squash loader ***
Nov 24 21:02:55 np0005534070.novalocal dracut[1274]: *** Squashing the files inside the initramfs ***
Nov 24 21:02:56 np0005534070.novalocal dracut[1274]: *** Squashing the files inside the initramfs done ***
Nov 24 21:02:56 np0005534070.novalocal dracut[1274]: *** Creating image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' ***
Nov 24 21:02:56 np0005534070.novalocal dracut[1274]: *** Hardlinking files ***
Nov 24 21:02:56 np0005534070.novalocal dracut[1274]: Mode:           real
Nov 24 21:02:56 np0005534070.novalocal dracut[1274]: Files:          50
Nov 24 21:02:57 np0005534070.novalocal dracut[1274]: Linked:         0 files
Nov 24 21:02:57 np0005534070.novalocal dracut[1274]: Compared:       0 xattrs
Nov 24 21:02:57 np0005534070.novalocal dracut[1274]: Compared:       0 files
Nov 24 21:02:57 np0005534070.novalocal dracut[1274]: Saved:          0 B
Nov 24 21:02:57 np0005534070.novalocal dracut[1274]: Duration:       0.000647 seconds
Nov 24 21:02:57 np0005534070.novalocal dracut[1274]: *** Hardlinking files done ***
Nov 24 21:02:57 np0005534070.novalocal dracut[1274]: *** Creating initramfs image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' done ***
Nov 24 21:02:58 np0005534070.novalocal kdumpctl[1019]: kdump: kexec: loaded kdump kernel
Nov 24 21:02:58 np0005534070.novalocal kdumpctl[1019]: kdump: Starting kdump: [OK]
Nov 24 21:02:58 np0005534070.novalocal systemd[1]: Finished Crash recovery kernel arming.
Nov 24 21:02:58 np0005534070.novalocal systemd[1]: Startup finished in 1.586s (kernel) + 2.932s (initrd) + 21.078s (userspace) = 25.597s.
Nov 24 21:03:03 np0005534070.novalocal sshd-session[4299]: Accepted publickey for zuul from 38.102.83.114 port 39476 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Nov 24 21:03:03 np0005534070.novalocal systemd[1]: Created slice User Slice of UID 1000.
Nov 24 21:03:03 np0005534070.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Nov 24 21:03:03 np0005534070.novalocal systemd-logind[806]: New session 1 of user zuul.
Nov 24 21:03:03 np0005534070.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Nov 24 21:03:03 np0005534070.novalocal systemd[1]: Starting User Manager for UID 1000...
Nov 24 21:03:03 np0005534070.novalocal systemd[4303]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 21:03:03 np0005534070.novalocal systemd[4303]: Queued start job for default target Main User Target.
Nov 24 21:03:03 np0005534070.novalocal systemd[4303]: Created slice User Application Slice.
Nov 24 21:03:03 np0005534070.novalocal systemd[4303]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 24 21:03:03 np0005534070.novalocal systemd[4303]: Started Daily Cleanup of User's Temporary Directories.
Nov 24 21:03:03 np0005534070.novalocal systemd[4303]: Reached target Paths.
Nov 24 21:03:03 np0005534070.novalocal systemd[4303]: Reached target Timers.
Nov 24 21:03:03 np0005534070.novalocal systemd[4303]: Starting D-Bus User Message Bus Socket...
Nov 24 21:03:03 np0005534070.novalocal systemd[4303]: Starting Create User's Volatile Files and Directories...
Nov 24 21:03:03 np0005534070.novalocal systemd[4303]: Finished Create User's Volatile Files and Directories.
Nov 24 21:03:03 np0005534070.novalocal systemd[4303]: Listening on D-Bus User Message Bus Socket.
Nov 24 21:03:03 np0005534070.novalocal systemd[4303]: Reached target Sockets.
Nov 24 21:03:03 np0005534070.novalocal systemd[4303]: Reached target Basic System.
Nov 24 21:03:03 np0005534070.novalocal systemd[4303]: Reached target Main User Target.
Nov 24 21:03:03 np0005534070.novalocal systemd[4303]: Startup finished in 280ms.
Nov 24 21:03:03 np0005534070.novalocal systemd[1]: Started User Manager for UID 1000.
Nov 24 21:03:03 np0005534070.novalocal systemd[1]: Started Session 1 of User zuul.
Nov 24 21:03:03 np0005534070.novalocal sshd-session[4299]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 21:03:04 np0005534070.novalocal python3[4385]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 21:03:06 np0005534070.novalocal python3[4413]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 21:03:11 np0005534070.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 24 21:03:12 np0005534070.novalocal python3[4474]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 21:03:13 np0005534070.novalocal python3[4514]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Nov 24 21:03:15 np0005534070.novalocal python3[4542]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDADdpjxAP81wtg5jH1+8EUWeoGDaEaKcHFfjZBD9t1p3YY8z1eu0y4D1jpIFsvE6slrD69PiaDuooceCkYICR+7s3rd8APPa/fS0UpPl595Sw3N+AbGlF6ODV8GqbFxqgtjmT6ozLSSF13pG87KeywQPqWq6idUwDoOddKDijT9d00Tf4RFeOzu/8PpoSymPK6vlZBZlDzrKuNTvPULNhPY8r1SDGvVNrxCfa5z/W1lW+McvwnvQFXIqdVK77Pmmgf5R+7pEamjeDF8ORZMhqX8AJRzsmW5SbNnJ+hP1hYGMHcF8d9/evoykcdHjWa9by0ihSH/wPoLAZ1oECW2Ylvn0RlZRFXPRl2r1GqsUt/5ZHn2lmhhCaJwtfln2OO11ZKHTyXHWF5oYKhQGIxxkgjD7ALkZJXJ8s0SV3sOREA+ZdEiA029vkI0FaBqXrtbMVJUx8qmfDTNQuA6uDvxP1vsjFKUawLu8hvwY5umoAhZf0nKhZ4a9+q5/hN0/GpDqc= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 21:03:15 np0005534070.novalocal python3[4566]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:03:16 np0005534070.novalocal python3[4665]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 21:03:16 np0005534070.novalocal python3[4736]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764018195.7433262-207-207831611177597/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=cf6ad720746b4efaae525da76486901b_id_rsa follow=False checksum=d4d347c15b55962fae67d6ea0ff7b6a8b7fa663d backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:03:17 np0005534070.novalocal python3[4859]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 21:03:17 np0005534070.novalocal sshd-session[4517]: Received disconnect from 116.71.136.125 port 59532:11: Bye Bye [preauth]
Nov 24 21:03:17 np0005534070.novalocal sshd-session[4517]: Disconnected from authenticating user root 116.71.136.125 port 59532 [preauth]
Nov 24 21:03:17 np0005534070.novalocal python3[4930]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764018196.6867905-240-123820794784824/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=cf6ad720746b4efaae525da76486901b_id_rsa.pub follow=False checksum=c082ed6ba42f2e9f0757fd308c23fd9b40756d29 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:03:18 np0005534070.novalocal python3[4978]: ansible-ping Invoked with data=pong
Nov 24 21:03:19 np0005534070.novalocal python3[5002]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 21:03:21 np0005534070.novalocal python3[5060]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Nov 24 21:03:22 np0005534070.novalocal python3[5092]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:03:22 np0005534070.novalocal python3[5116]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:03:22 np0005534070.novalocal python3[5140]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:03:22 np0005534070.novalocal python3[5164]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:03:23 np0005534070.novalocal python3[5188]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:03:23 np0005534070.novalocal python3[5212]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:03:24 np0005534070.novalocal sudo[5236]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yisekxpomfkzzxxhgoczdbytuakwkjdg ; /usr/bin/python3'
Nov 24 21:03:24 np0005534070.novalocal sudo[5236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:03:25 np0005534070.novalocal python3[5238]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:03:25 np0005534070.novalocal sudo[5236]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:25 np0005534070.novalocal sudo[5314]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjluerpsgsdoyfnzywgzgjksjhrtshet ; /usr/bin/python3'
Nov 24 21:03:25 np0005534070.novalocal sudo[5314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:03:25 np0005534070.novalocal python3[5316]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 21:03:25 np0005534070.novalocal sudo[5314]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:26 np0005534070.novalocal sudo[5387]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rekieejlcynwbstqbcccpzuboavclpny ; /usr/bin/python3'
Nov 24 21:03:26 np0005534070.novalocal sudo[5387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:03:26 np0005534070.novalocal python3[5389]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764018205.2509048-21-153227857170041/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:03:26 np0005534070.novalocal sudo[5387]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:26 np0005534070.novalocal python3[5437]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 21:03:27 np0005534070.novalocal python3[5461]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 21:03:27 np0005534070.novalocal python3[5485]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 21:03:27 np0005534070.novalocal python3[5509]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 21:03:28 np0005534070.novalocal python3[5533]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 21:03:28 np0005534070.novalocal python3[5557]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 21:03:28 np0005534070.novalocal python3[5581]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 21:03:28 np0005534070.novalocal python3[5605]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 21:03:29 np0005534070.novalocal python3[5629]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 21:03:29 np0005534070.novalocal python3[5653]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 21:03:29 np0005534070.novalocal python3[5677]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 21:03:30 np0005534070.novalocal python3[5701]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 21:03:30 np0005534070.novalocal python3[5725]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 21:03:30 np0005534070.novalocal python3[5749]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 21:03:31 np0005534070.novalocal python3[5773]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 21:03:31 np0005534070.novalocal python3[5797]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 21:03:31 np0005534070.novalocal python3[5821]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 21:03:32 np0005534070.novalocal python3[5845]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 21:03:32 np0005534070.novalocal python3[5869]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 21:03:32 np0005534070.novalocal python3[5893]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 21:03:32 np0005534070.novalocal python3[5917]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 21:03:33 np0005534070.novalocal python3[5941]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 21:03:33 np0005534070.novalocal python3[5965]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 21:03:33 np0005534070.novalocal python3[5989]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 21:03:34 np0005534070.novalocal python3[6013]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 21:03:34 np0005534070.novalocal python3[6037]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 21:03:36 np0005534070.novalocal sudo[6061]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yehcziisycqztvsowezfwjrbrrytgwhj ; /usr/bin/python3'
Nov 24 21:03:36 np0005534070.novalocal sudo[6061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:03:36 np0005534070.novalocal python3[6063]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 24 21:03:36 np0005534070.novalocal systemd[1]: Starting Time & Date Service...
Nov 24 21:03:36 np0005534070.novalocal systemd[1]: Started Time & Date Service.
Nov 24 21:03:36 np0005534070.novalocal systemd-timedated[6065]: Changed time zone to 'UTC' (UTC).
Nov 24 21:03:36 np0005534070.novalocal sudo[6061]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:38 np0005534070.novalocal sudo[6092]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tssjzbrbqahnnqybpbweiwwtgckbzrcg ; /usr/bin/python3'
Nov 24 21:03:38 np0005534070.novalocal sudo[6092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:03:38 np0005534070.novalocal python3[6094]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:03:38 np0005534070.novalocal sudo[6092]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:38 np0005534070.novalocal python3[6170]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 21:03:39 np0005534070.novalocal python3[6241]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764018218.4541323-153-122579076125576/source _original_basename=tmp0dadpemf follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:03:39 np0005534070.novalocal python3[6341]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 21:03:40 np0005534070.novalocal python3[6412]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764018219.394766-183-57131804494746/source _original_basename=tmp2kb_my5d follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:03:40 np0005534070.novalocal sudo[6512]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czqznzcjzqamwzyvuybiqkhripkirmjk ; /usr/bin/python3'
Nov 24 21:03:40 np0005534070.novalocal sudo[6512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:03:40 np0005534070.novalocal python3[6514]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 21:03:40 np0005534070.novalocal sudo[6512]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:41 np0005534070.novalocal sudo[6585]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aflxaepwmotecomngvimgvnzhtvdqxga ; /usr/bin/python3'
Nov 24 21:03:41 np0005534070.novalocal sudo[6585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:03:41 np0005534070.novalocal python3[6587]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764018220.4655402-231-239120907278768/source _original_basename=tmp64_vcu4a follow=False checksum=a972263c83cf44cabc7754859f6611771d0cc68d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:03:41 np0005534070.novalocal sudo[6585]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:41 np0005534070.novalocal python3[6635]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:03:42 np0005534070.novalocal python3[6661]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:03:42 np0005534070.novalocal sudo[6739]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cebbfcwamepthybtpzwoylmqxpvbpghs ; /usr/bin/python3'
Nov 24 21:03:42 np0005534070.novalocal sudo[6739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:03:42 np0005534070.novalocal python3[6741]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 21:03:42 np0005534070.novalocal sudo[6739]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:42 np0005534070.novalocal sudo[6812]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmvyhmmbstwqnkvmadycvthnrnbptfbe ; /usr/bin/python3'
Nov 24 21:03:42 np0005534070.novalocal sudo[6812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:03:42 np0005534070.novalocal python3[6814]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764018222.158124-273-48729776145618/source _original_basename=tmpk1l24gfa follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:03:42 np0005534070.novalocal sudo[6812]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:43 np0005534070.novalocal sudo[6863]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwgnmnrmxxigafplggsoxtckdgfbhvcu ; /usr/bin/python3'
Nov 24 21:03:43 np0005534070.novalocal sudo[6863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:03:43 np0005534070.novalocal python3[6865]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e3b-3c83-8eae-7943-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:03:43 np0005534070.novalocal sudo[6863]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:44 np0005534070.novalocal python3[6893]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163e3b-3c83-8eae-7943-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Nov 24 21:03:45 np0005534070.novalocal python3[6921]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:04:00 np0005534070.novalocal sshd-session[6922]: Received disconnect from 190.129.114.222 port 41902:11: Bye Bye [preauth]
Nov 24 21:04:00 np0005534070.novalocal sshd-session[6922]: Disconnected from authenticating user root 190.129.114.222 port 41902 [preauth]
Nov 24 21:04:02 np0005534070.novalocal sudo[6947]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtbifqkecdhqvtgymomrfzssrofjahrb ; /usr/bin/python3'
Nov 24 21:04:02 np0005534070.novalocal sudo[6947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:04:02 np0005534070.novalocal python3[6949]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:04:02 np0005534070.novalocal sudo[6947]: pam_unix(sudo:session): session closed for user root
Nov 24 21:04:06 np0005534070.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 24 21:04:18 np0005534070.novalocal sshd-session[6955]: Invalid user solana from 45.148.10.240 port 55698
Nov 24 21:04:18 np0005534070.novalocal sshd-session[6955]: Connection closed by invalid user solana 45.148.10.240 port 55698 [preauth]
Nov 24 21:04:39 np0005534070.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 24 21:04:39 np0005534070.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Nov 24 21:04:39 np0005534070.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Nov 24 21:04:39 np0005534070.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Nov 24 21:04:39 np0005534070.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Nov 24 21:04:39 np0005534070.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Nov 24 21:04:39 np0005534070.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Nov 24 21:04:39 np0005534070.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Nov 24 21:04:39 np0005534070.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Nov 24 21:04:39 np0005534070.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Nov 24 21:04:39 np0005534070.novalocal NetworkManager[860]: <info>  [1764018279.6362] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 24 21:04:39 np0005534070.novalocal systemd-udevd[6957]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 21:04:39 np0005534070.novalocal NetworkManager[860]: <info>  [1764018279.6572] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 21:04:39 np0005534070.novalocal NetworkManager[860]: <info>  [1764018279.6603] settings: (eth1): created default wired connection 'Wired connection 1'
Nov 24 21:04:39 np0005534070.novalocal NetworkManager[860]: <info>  [1764018279.6608] device (eth1): carrier: link connected
Nov 24 21:04:39 np0005534070.novalocal NetworkManager[860]: <info>  [1764018279.6611] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 24 21:04:39 np0005534070.novalocal NetworkManager[860]: <info>  [1764018279.6617] policy: auto-activating connection 'Wired connection 1' (e2d4b87a-173d-3177-ab54-3ebbe6b2891a)
Nov 24 21:04:39 np0005534070.novalocal NetworkManager[860]: <info>  [1764018279.6622] device (eth1): Activation: starting connection 'Wired connection 1' (e2d4b87a-173d-3177-ab54-3ebbe6b2891a)
Nov 24 21:04:39 np0005534070.novalocal NetworkManager[860]: <info>  [1764018279.6623] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 21:04:39 np0005534070.novalocal NetworkManager[860]: <info>  [1764018279.6626] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 21:04:39 np0005534070.novalocal NetworkManager[860]: <info>  [1764018279.6631] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 21:04:39 np0005534070.novalocal NetworkManager[860]: <info>  [1764018279.6637] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 24 21:04:40 np0005534070.novalocal python3[6984]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e3b-3c83-6c85-dc82-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:04:47 np0005534070.novalocal sudo[7062]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnvzeihwhkbiznddrovsihpyltcpueai ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 24 21:04:47 np0005534070.novalocal sudo[7062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:04:47 np0005534070.novalocal python3[7064]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 21:04:47 np0005534070.novalocal sudo[7062]: pam_unix(sudo:session): session closed for user root
Nov 24 21:04:47 np0005534070.novalocal sudo[7135]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opthfttazadxztcaruorefwpxpooqcma ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 24 21:04:47 np0005534070.novalocal sudo[7135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:04:47 np0005534070.novalocal python3[7137]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764018287.0518718-102-195643105404905/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=10bea9693224c93c7808a4a93f35c4889ea62e4d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:04:47 np0005534070.novalocal sudo[7135]: pam_unix(sudo:session): session closed for user root
Nov 24 21:04:48 np0005534070.novalocal sudo[7185]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aoyyrhllugtupnnzdkpptodjsndooauq ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 24 21:04:48 np0005534070.novalocal sudo[7185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:04:48 np0005534070.novalocal python3[7187]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 21:04:48 np0005534070.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 24 21:04:48 np0005534070.novalocal systemd[1]: Stopped Network Manager Wait Online.
Nov 24 21:04:48 np0005534070.novalocal systemd[1]: Stopping Network Manager Wait Online...
Nov 24 21:04:48 np0005534070.novalocal systemd[1]: Stopping Network Manager...
Nov 24 21:04:48 np0005534070.novalocal NetworkManager[860]: <info>  [1764018288.7340] caught SIGTERM, shutting down normally.
Nov 24 21:04:48 np0005534070.novalocal NetworkManager[860]: <info>  [1764018288.7352] dhcp4 (eth0): canceled DHCP transaction
Nov 24 21:04:48 np0005534070.novalocal NetworkManager[860]: <info>  [1764018288.7352] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 24 21:04:48 np0005534070.novalocal NetworkManager[860]: <info>  [1764018288.7353] dhcp4 (eth0): state changed no lease
Nov 24 21:04:48 np0005534070.novalocal NetworkManager[860]: <info>  [1764018288.7357] manager: NetworkManager state is now CONNECTING
Nov 24 21:04:48 np0005534070.novalocal NetworkManager[860]: <info>  [1764018288.7514] dhcp4 (eth1): canceled DHCP transaction
Nov 24 21:04:48 np0005534070.novalocal NetworkManager[860]: <info>  [1764018288.7514] dhcp4 (eth1): state changed no lease
Nov 24 21:04:48 np0005534070.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 24 21:04:48 np0005534070.novalocal NetworkManager[860]: <info>  [1764018288.7572] exiting (success)
Nov 24 21:04:48 np0005534070.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 24 21:04:48 np0005534070.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 24 21:04:48 np0005534070.novalocal systemd[1]: Stopped Network Manager.
Nov 24 21:04:48 np0005534070.novalocal systemd[1]: Starting Network Manager...
Nov 24 21:04:48 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018288.8466] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:6af3ca85-64ae-4c3b-bcae-1314bd1d1259)
Nov 24 21:04:48 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018288.8471] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 24 21:04:48 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018288.8565] manager[0x55f6618f9070]: monitoring kernel firmware directory '/lib/firmware'.
Nov 24 21:04:48 np0005534070.novalocal systemd[1]: Starting Hostname Service...
Nov 24 21:04:48 np0005534070.novalocal systemd[1]: Started Hostname Service.
Nov 24 21:04:48 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018288.9689] hostname: hostname: using hostnamed
Nov 24 21:04:48 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018288.9689] hostname: static hostname changed from (none) to "np0005534070.novalocal"
Nov 24 21:04:48 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018288.9698] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 24 21:04:48 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018288.9706] manager[0x55f6618f9070]: rfkill: Wi-Fi hardware radio set enabled
Nov 24 21:04:48 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018288.9707] manager[0x55f6618f9070]: rfkill: WWAN hardware radio set enabled
Nov 24 21:04:48 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018288.9755] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 24 21:04:48 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018288.9756] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 24 21:04:48 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018288.9757] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 24 21:04:48 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018288.9758] manager: Networking is enabled by state file
Nov 24 21:04:48 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018288.9761] settings: Loaded settings plugin: keyfile (internal)
Nov 24 21:04:48 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018288.9769] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 24 21:04:48 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018288.9819] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 24 21:04:48 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018288.9836] dhcp: init: Using DHCP client 'internal'
Nov 24 21:04:48 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018288.9841] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 24 21:04:48 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018288.9851] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 21:04:48 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018288.9862] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 24 21:04:48 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018288.9876] device (lo): Activation: starting connection 'lo' (9e721d51-16df-4701-8382-ea90a88a1946)
Nov 24 21:04:48 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018288.9888] device (eth0): carrier: link connected
Nov 24 21:04:48 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018288.9896] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 24 21:04:48 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018288.9904] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 24 21:04:48 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018288.9905] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 24 21:04:48 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018288.9916] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 24 21:04:48 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018288.9929] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 24 21:04:48 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018288.9940] device (eth1): carrier: link connected
Nov 24 21:04:48 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018288.9947] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 24 21:04:48 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018288.9955] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (e2d4b87a-173d-3177-ab54-3ebbe6b2891a) (indicated)
Nov 24 21:04:48 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018288.9956] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 24 21:04:48 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018288.9965] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 24 21:04:48 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018288.9976] device (eth1): Activation: starting connection 'Wired connection 1' (e2d4b87a-173d-3177-ab54-3ebbe6b2891a)
Nov 24 21:04:48 np0005534070.novalocal systemd[1]: Started Network Manager.
Nov 24 21:04:48 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018288.9986] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 24 21:04:48 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018288.9992] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 24 21:04:48 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018288.9999] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 24 21:04:49 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018289.0004] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 24 21:04:49 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018289.0009] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 24 21:04:49 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018289.0015] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 24 21:04:49 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018289.0020] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 24 21:04:49 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018289.0026] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 24 21:04:49 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018289.0032] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 24 21:04:49 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018289.0046] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 24 21:04:49 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018289.0051] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 24 21:04:49 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018289.0066] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 24 21:04:49 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018289.0072] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 24 21:04:49 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018289.0100] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 24 21:04:49 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018289.0109] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 24 21:04:49 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018289.0120] device (lo): Activation: successful, device activated.
Nov 24 21:04:49 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018289.0132] dhcp4 (eth0): state changed new lease, address=38.102.83.66
Nov 24 21:04:49 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018289.0144] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 24 21:04:49 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018289.0238] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 24 21:04:49 np0005534070.novalocal systemd[1]: Starting Network Manager Wait Online...
Nov 24 21:04:49 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018289.0286] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 24 21:04:49 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018289.0289] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 24 21:04:49 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018289.0296] manager: NetworkManager state is now CONNECTED_SITE
Nov 24 21:04:49 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018289.0304] device (eth0): Activation: successful, device activated.
Nov 24 21:04:49 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018289.0312] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 24 21:04:49 np0005534070.novalocal sudo[7185]: pam_unix(sudo:session): session closed for user root
Nov 24 21:04:49 np0005534070.novalocal python3[7279]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e3b-3c83-6c85-dc82-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:04:49 np0005534070.novalocal irqbalance[801]: Cannot change IRQ 26 affinity: Operation not permitted
Nov 24 21:04:49 np0005534070.novalocal irqbalance[801]: IRQ 26 affinity is now unmanaged
Nov 24 21:04:59 np0005534070.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 24 21:05:04 np0005534070.novalocal sshd-session[7282]: Connection closed by 116.71.136.125 port 38598 [preauth]
Nov 24 21:05:18 np0005534070.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 24 21:05:25 np0005534070.novalocal sshd-session[7288]: Received disconnect from 190.129.114.222 port 38198:11: Bye Bye [preauth]
Nov 24 21:05:25 np0005534070.novalocal sshd-session[7288]: Disconnected from authenticating user root 190.129.114.222 port 38198 [preauth]
Nov 24 21:05:34 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018334.4558] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 24 21:05:34 np0005534070.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 24 21:05:34 np0005534070.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 24 21:05:34 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018334.4877] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 24 21:05:34 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018334.4886] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 24 21:05:34 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018334.4912] device (eth1): Activation: successful, device activated.
Nov 24 21:05:34 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018334.4929] manager: startup complete
Nov 24 21:05:34 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018334.4934] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Nov 24 21:05:34 np0005534070.novalocal NetworkManager[7199]: <warn>  [1764018334.4959] device (eth1): Activation: failed for connection 'Wired connection 1'
Nov 24 21:05:34 np0005534070.novalocal systemd[1]: Finished Network Manager Wait Online.
Nov 24 21:05:34 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018334.4987] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Nov 24 21:05:34 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018334.5114] dhcp4 (eth1): canceled DHCP transaction
Nov 24 21:05:34 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018334.5115] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 24 21:05:34 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018334.5115] dhcp4 (eth1): state changed no lease
Nov 24 21:05:34 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018334.5148] policy: auto-activating connection 'ci-private-network' (991fe21a-95d5-515f-bd84-4bf4dd24e652)
Nov 24 21:05:34 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018334.5158] device (eth1): Activation: starting connection 'ci-private-network' (991fe21a-95d5-515f-bd84-4bf4dd24e652)
Nov 24 21:05:34 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018334.5160] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 21:05:34 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018334.5168] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 21:05:34 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018334.5185] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 21:05:34 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018334.5204] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 21:05:34 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018334.5255] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 21:05:34 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018334.5258] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 21:05:34 np0005534070.novalocal NetworkManager[7199]: <info>  [1764018334.5269] device (eth1): Activation: successful, device activated.
Nov 24 21:05:44 np0005534070.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 24 21:05:49 np0005534070.novalocal sshd-session[4312]: Received disconnect from 38.102.83.114 port 39476:11: disconnected by user
Nov 24 21:05:49 np0005534070.novalocal sshd-session[4312]: Disconnected from user zuul 38.102.83.114 port 39476
Nov 24 21:05:49 np0005534070.novalocal sshd-session[4299]: pam_unix(sshd:session): session closed for user zuul
Nov 24 21:05:49 np0005534070.novalocal systemd-logind[806]: Session 1 logged out. Waiting for processes to exit.
Nov 24 21:05:50 np0005534070.novalocal sshd-session[7313]: Accepted publickey for zuul from 38.102.83.114 port 38816 ssh2: RSA SHA256:s+FA6JzLhmwu0B38HrAw2rBq4K/LlOWNM3GY6zwZaMs
Nov 24 21:05:50 np0005534070.novalocal systemd-logind[806]: New session 3 of user zuul.
Nov 24 21:05:50 np0005534070.novalocal systemd[1]: Started Session 3 of User zuul.
Nov 24 21:05:50 np0005534070.novalocal sshd-session[7313]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 21:05:51 np0005534070.novalocal sudo[7392]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ichukqldynbcsrcjgeiqavrhsdkphkzi ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 24 21:05:51 np0005534070.novalocal sudo[7392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:05:51 np0005534070.novalocal python3[7394]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 21:05:51 np0005534070.novalocal sudo[7392]: pam_unix(sudo:session): session closed for user root
Nov 24 21:05:51 np0005534070.novalocal sudo[7465]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzjspsodaxmbsvruhjpqpfkrynixsypu ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 24 21:05:51 np0005534070.novalocal sudo[7465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:05:51 np0005534070.novalocal python3[7467]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764018350.9020238-259-160184288782850/source _original_basename=tmp1djx8nxl follow=False checksum=1d54ba0aad4e4e4163c5fc0a98d9bd7ae0d7a8d0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:05:51 np0005534070.novalocal sudo[7465]: pam_unix(sudo:session): session closed for user root
Nov 24 21:05:54 np0005534070.novalocal sshd-session[7316]: Connection closed by 38.102.83.114 port 38816
Nov 24 21:05:54 np0005534070.novalocal sshd-session[7313]: pam_unix(sshd:session): session closed for user zuul
Nov 24 21:05:54 np0005534070.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Nov 24 21:05:54 np0005534070.novalocal systemd-logind[806]: Session 3 logged out. Waiting for processes to exit.
Nov 24 21:05:54 np0005534070.novalocal systemd-logind[806]: Removed session 3.
Nov 24 21:05:56 np0005534070.novalocal systemd[4303]: Starting Mark boot as successful...
Nov 24 21:05:56 np0005534070.novalocal systemd[4303]: Finished Mark boot as successful.
Nov 24 21:06:06 np0005534070.novalocal sshd-session[7493]: Received disconnect from 80.94.93.233 port 34558:11:  [preauth]
Nov 24 21:06:06 np0005534070.novalocal sshd-session[7493]: Disconnected from authenticating user root 80.94.93.233 port 34558 [preauth]
Nov 24 21:06:10 np0005534070.novalocal sshd[1010]: Timeout before authentication for connection from 115.190.136.219 to 38.102.83.66, pid = 6953
Nov 24 21:06:18 np0005534070.novalocal sshd-session[7495]: Invalid user solana from 45.148.10.240 port 56696
Nov 24 21:06:19 np0005534070.novalocal sshd-session[7495]: Connection closed by invalid user solana 45.148.10.240 port 56696 [preauth]
Nov 24 21:06:37 np0005534070.novalocal sshd-session[7497]: Invalid user docker from 116.71.136.125 port 49282
Nov 24 21:06:37 np0005534070.novalocal sshd-session[7497]: Received disconnect from 116.71.136.125 port 49282:11: Bye Bye [preauth]
Nov 24 21:06:37 np0005534070.novalocal sshd-session[7497]: Disconnected from invalid user docker 116.71.136.125 port 49282 [preauth]
Nov 24 21:06:48 np0005534070.novalocal sshd-session[7499]: Invalid user steam from 190.129.114.222 port 42512
Nov 24 21:06:48 np0005534070.novalocal sshd-session[7499]: Received disconnect from 190.129.114.222 port 42512:11: Bye Bye [preauth]
Nov 24 21:06:48 np0005534070.novalocal sshd-session[7499]: Disconnected from invalid user steam 190.129.114.222 port 42512 [preauth]
Nov 24 21:07:08 np0005534070.novalocal sshd-session[7501]: Invalid user ftpuser from 80.94.95.115 port 43810
Nov 24 21:07:08 np0005534070.novalocal sshd-session[7501]: Connection closed by invalid user ftpuser 80.94.95.115 port 43810 [preauth]
Nov 24 21:08:10 np0005534070.novalocal sshd-session[7504]: Received disconnect from 190.129.114.222 port 46866:11: Bye Bye [preauth]
Nov 24 21:08:10 np0005534070.novalocal sshd-session[7504]: Disconnected from authenticating user root 190.129.114.222 port 46866 [preauth]
Nov 24 21:08:14 np0005534070.novalocal sshd-session[7506]: Invalid user django from 116.71.136.125 port 51964
Nov 24 21:08:14 np0005534070.novalocal sshd-session[7506]: Received disconnect from 116.71.136.125 port 51964:11: Bye Bye [preauth]
Nov 24 21:08:14 np0005534070.novalocal sshd-session[7506]: Disconnected from invalid user django 116.71.136.125 port 51964 [preauth]
Nov 24 21:08:22 np0005534070.novalocal sshd-session[7508]: Invalid user pbanx from 45.148.10.240 port 50858
Nov 24 21:08:22 np0005534070.novalocal sshd-session[7508]: Connection closed by invalid user pbanx 45.148.10.240 port 50858 [preauth]
Nov 24 21:08:56 np0005534070.novalocal systemd[4303]: Created slice User Background Tasks Slice.
Nov 24 21:08:56 np0005534070.novalocal systemd[4303]: Starting Cleanup of User's Temporary Files and Directories...
Nov 24 21:08:56 np0005534070.novalocal systemd[4303]: Finished Cleanup of User's Temporary Files and Directories.
Nov 24 21:09:35 np0005534070.novalocal sshd-session[7513]: Invalid user sonarqube from 190.129.114.222 port 38482
Nov 24 21:09:35 np0005534070.novalocal sshd-session[7513]: Received disconnect from 190.129.114.222 port 38482:11: Bye Bye [preauth]
Nov 24 21:09:35 np0005534070.novalocal sshd-session[7513]: Disconnected from invalid user sonarqube 190.129.114.222 port 38482 [preauth]
Nov 24 21:09:51 np0005534070.novalocal sshd-session[7515]: Invalid user work from 116.71.136.125 port 49292
Nov 24 21:09:51 np0005534070.novalocal sshd-session[7515]: Received disconnect from 116.71.136.125 port 49292:11: Bye Bye [preauth]
Nov 24 21:09:51 np0005534070.novalocal sshd-session[7515]: Disconnected from invalid user work 116.71.136.125 port 49292 [preauth]
Nov 24 21:10:27 np0005534070.novalocal sshd[1010]: Timeout before authentication for connection from 115.190.136.219 to 38.102.83.66, pid = 7510
Nov 24 21:10:32 np0005534070.novalocal sshd-session[7517]: Invalid user banxgg from 45.148.10.240 port 56780
Nov 24 21:10:32 np0005534070.novalocal sshd-session[7517]: Connection closed by invalid user banxgg 45.148.10.240 port 56780 [preauth]
Nov 24 21:10:59 np0005534070.novalocal sshd-session[7520]: Invalid user cat from 190.129.114.222 port 57240
Nov 24 21:11:00 np0005534070.novalocal sshd-session[7520]: Received disconnect from 190.129.114.222 port 57240:11: Bye Bye [preauth]
Nov 24 21:11:00 np0005534070.novalocal sshd-session[7520]: Disconnected from invalid user cat 190.129.114.222 port 57240 [preauth]
Nov 24 21:11:17 np0005534070.novalocal sshd-session[7522]: Connection closed by 58.59.233.160 port 49898
Nov 24 21:11:27 np0005534070.novalocal sshd-session[7526]: Received disconnect from 116.71.136.125 port 37352:11: Bye Bye [preauth]
Nov 24 21:11:27 np0005534070.novalocal sshd-session[7526]: Disconnected from authenticating user root 116.71.136.125 port 37352 [preauth]
Nov 24 21:11:30 np0005534070.novalocal sshd-session[7529]: Accepted publickey for zuul from 38.102.83.114 port 36010 ssh2: RSA SHA256:s+FA6JzLhmwu0B38HrAw2rBq4K/LlOWNM3GY6zwZaMs
Nov 24 21:11:30 np0005534070.novalocal systemd-logind[806]: New session 4 of user zuul.
Nov 24 21:11:30 np0005534070.novalocal systemd[1]: Started Session 4 of User zuul.
Nov 24 21:11:30 np0005534070.novalocal sshd-session[7529]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 21:11:30 np0005534070.novalocal sudo[7556]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwhrymwatyugsazdzhrsdpgifkjafubm ; /usr/bin/python3'
Nov 24 21:11:30 np0005534070.novalocal sudo[7556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:11:30 np0005534070.novalocal python3[7558]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163e3b-3c83-f545-6449-000000001ce2-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:11:30 np0005534070.novalocal sudo[7556]: pam_unix(sudo:session): session closed for user root
Nov 24 21:11:31 np0005534070.novalocal sudo[7584]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmkerjfzyykgxbxqmfkqylsoffdntjlb ; /usr/bin/python3'
Nov 24 21:11:31 np0005534070.novalocal sudo[7584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:11:31 np0005534070.novalocal python3[7586]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:11:31 np0005534070.novalocal sudo[7584]: pam_unix(sudo:session): session closed for user root
Nov 24 21:11:31 np0005534070.novalocal sudo[7610]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohluivocvflqtqfhvzgsmrbtftbekifg ; /usr/bin/python3'
Nov 24 21:11:31 np0005534070.novalocal sudo[7610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:11:31 np0005534070.novalocal python3[7613]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:11:31 np0005534070.novalocal sudo[7610]: pam_unix(sudo:session): session closed for user root
Nov 24 21:11:31 np0005534070.novalocal sudo[7637]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcwqaqpjaltlkohtyiaxzbmjhpfwcsnk ; /usr/bin/python3'
Nov 24 21:11:31 np0005534070.novalocal sudo[7637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:11:31 np0005534070.novalocal sshd-session[7524]: Connection closed by 171.37.46.238 port 26527 [preauth]
Nov 24 21:11:31 np0005534070.novalocal python3[7639]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:11:31 np0005534070.novalocal sudo[7637]: pam_unix(sudo:session): session closed for user root
Nov 24 21:11:31 np0005534070.novalocal sudo[7663]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mngdrubxdxvcdreabkopcpfhhmjwrzfu ; /usr/bin/python3'
Nov 24 21:11:31 np0005534070.novalocal sudo[7663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:11:32 np0005534070.novalocal python3[7665]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:11:32 np0005534070.novalocal sudo[7663]: pam_unix(sudo:session): session closed for user root
Nov 24 21:11:32 np0005534070.novalocal sudo[7689]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrkshcxgkobfzqittkzcrvymvvneklzg ; /usr/bin/python3'
Nov 24 21:11:32 np0005534070.novalocal sudo[7689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:11:32 np0005534070.novalocal python3[7691]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:11:32 np0005534070.novalocal sudo[7689]: pam_unix(sudo:session): session closed for user root
Nov 24 21:11:32 np0005534070.novalocal sudo[7767]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtetfrmyvhwawlvdmtbdvkpskjgqvhes ; /usr/bin/python3'
Nov 24 21:11:32 np0005534070.novalocal sudo[7767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:11:33 np0005534070.novalocal python3[7769]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 21:11:33 np0005534070.novalocal sudo[7767]: pam_unix(sudo:session): session closed for user root
Nov 24 21:11:33 np0005534070.novalocal sudo[7840]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkkaxhmkunfngqnaazcbbnuybtizrcwp ; /usr/bin/python3'
Nov 24 21:11:33 np0005534070.novalocal sudo[7840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:11:33 np0005534070.novalocal python3[7842]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764018692.7991354-489-159082734490149/source _original_basename=tmpwydqdbsq follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:11:33 np0005534070.novalocal sudo[7840]: pam_unix(sudo:session): session closed for user root
Nov 24 21:11:34 np0005534070.novalocal sudo[7890]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhcucndyrdodvphprgtdyidziuogazml ; /usr/bin/python3'
Nov 24 21:11:34 np0005534070.novalocal sudo[7890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:11:34 np0005534070.novalocal python3[7892]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 21:11:34 np0005534070.novalocal systemd[1]: Reloading.
Nov 24 21:11:34 np0005534070.novalocal systemd-rc-local-generator[7911]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:11:34 np0005534070.novalocal sudo[7890]: pam_unix(sudo:session): session closed for user root
Nov 24 21:11:35 np0005534070.novalocal sudo[7946]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-roadrhjxakqqooqpfoicznxewwlqlglq ; /usr/bin/python3'
Nov 24 21:11:35 np0005534070.novalocal sudo[7946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:11:36 np0005534070.novalocal python3[7948]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Nov 24 21:11:36 np0005534070.novalocal sudo[7946]: pam_unix(sudo:session): session closed for user root
Nov 24 21:11:36 np0005534070.novalocal sudo[7972]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsbfufvfvgpxkymxpqidaofrsuflgyjx ; /usr/bin/python3'
Nov 24 21:11:36 np0005534070.novalocal sudo[7972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:11:36 np0005534070.novalocal python3[7974]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:11:36 np0005534070.novalocal sudo[7972]: pam_unix(sudo:session): session closed for user root
Nov 24 21:11:36 np0005534070.novalocal sudo[8000]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybmrakbxkupqlmmlhlomymwcrpevaiiv ; /usr/bin/python3'
Nov 24 21:11:36 np0005534070.novalocal sudo[8000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:11:36 np0005534070.novalocal python3[8002]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:11:36 np0005534070.novalocal sudo[8000]: pam_unix(sudo:session): session closed for user root
Nov 24 21:11:36 np0005534070.novalocal sudo[8028]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cazihpnnfcpmckgcfvvvfuvjhhclxhpr ; /usr/bin/python3'
Nov 24 21:11:36 np0005534070.novalocal sudo[8028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:11:37 np0005534070.novalocal python3[8030]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:11:37 np0005534070.novalocal sudo[8028]: pam_unix(sudo:session): session closed for user root
Nov 24 21:11:37 np0005534070.novalocal sudo[8056]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmpdczzkjeslcyzyohhihgajwacoqdqa ; /usr/bin/python3'
Nov 24 21:11:37 np0005534070.novalocal sudo[8056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:11:37 np0005534070.novalocal python3[8058]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:11:37 np0005534070.novalocal sudo[8056]: pam_unix(sudo:session): session closed for user root
Nov 24 21:11:37 np0005534070.novalocal python3[8085]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163e3b-3c83-f545-6449-000000001ce9-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:11:38 np0005534070.novalocal python3[8115]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 24 21:11:40 np0005534070.novalocal sshd-session[7532]: Connection closed by 38.102.83.114 port 36010
Nov 24 21:11:40 np0005534070.novalocal sshd-session[7529]: pam_unix(sshd:session): session closed for user zuul
Nov 24 21:11:40 np0005534070.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Nov 24 21:11:40 np0005534070.novalocal systemd[1]: session-4.scope: Consumed 4.692s CPU time.
Nov 24 21:11:40 np0005534070.novalocal systemd-logind[806]: Session 4 logged out. Waiting for processes to exit.
Nov 24 21:11:40 np0005534070.novalocal systemd-logind[806]: Removed session 4.
Nov 24 21:11:41 np0005534070.novalocal sshd-session[8119]: Accepted publickey for zuul from 38.102.83.114 port 60910 ssh2: RSA SHA256:s+FA6JzLhmwu0B38HrAw2rBq4K/LlOWNM3GY6zwZaMs
Nov 24 21:11:41 np0005534070.novalocal systemd-logind[806]: New session 5 of user zuul.
Nov 24 21:11:41 np0005534070.novalocal systemd[1]: Started Session 5 of User zuul.
Nov 24 21:11:41 np0005534070.novalocal sshd-session[8119]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 21:11:41 np0005534070.novalocal sudo[8146]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kspqywmcnyubgkgdcvtfcpsnphyjhjpv ; /usr/bin/python3'
Nov 24 21:11:41 np0005534070.novalocal sudo[8146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:11:42 np0005534070.novalocal python3[8148]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 24 21:12:01 np0005534070.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 24 21:12:01 np0005534070.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 21:12:01 np0005534070.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 24 21:12:01 np0005534070.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 21:12:01 np0005534070.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 24 21:12:01 np0005534070.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 21:12:01 np0005534070.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 21:12:01 np0005534070.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 21:12:09 np0005534070.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 24 21:12:09 np0005534070.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 21:12:09 np0005534070.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 24 21:12:09 np0005534070.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 21:12:09 np0005534070.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 24 21:12:09 np0005534070.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 21:12:09 np0005534070.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 21:12:09 np0005534070.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 21:12:19 np0005534070.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 24 21:12:19 np0005534070.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 21:12:19 np0005534070.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 24 21:12:19 np0005534070.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 21:12:19 np0005534070.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 24 21:12:19 np0005534070.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 21:12:19 np0005534070.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 21:12:19 np0005534070.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 21:12:20 np0005534070.novalocal setsebool[8210]: The virt_use_nfs policy boolean was changed to 1 by root
Nov 24 21:12:20 np0005534070.novalocal setsebool[8210]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Nov 24 21:12:21 np0005534070.novalocal sshd-session[8212]: Invalid user leo from 190.129.114.222 port 58988
Nov 24 21:12:21 np0005534070.novalocal sshd-session[8212]: Received disconnect from 190.129.114.222 port 58988:11: Bye Bye [preauth]
Nov 24 21:12:21 np0005534070.novalocal sshd-session[8212]: Disconnected from invalid user leo 190.129.114.222 port 58988 [preauth]
Nov 24 21:12:31 np0005534070.novalocal kernel: SELinux:  Converting 388 SID table entries...
Nov 24 21:12:31 np0005534070.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 21:12:31 np0005534070.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 24 21:12:31 np0005534070.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 21:12:31 np0005534070.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 24 21:12:31 np0005534070.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 21:12:31 np0005534070.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 21:12:31 np0005534070.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 21:12:34 np0005534070.novalocal sshd-session[8927]: Invalid user banx from 45.148.10.240 port 42302
Nov 24 21:12:34 np0005534070.novalocal sshd-session[8927]: Connection closed by invalid user banx 45.148.10.240 port 42302 [preauth]
Nov 24 21:12:37 np0005534070.novalocal sshd-session[8224]: Connection closed by 115.190.136.219 port 43120 [preauth]
Nov 24 21:12:51 np0005534070.novalocal dbus-broker-launch[784]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 24 21:12:51 np0005534070.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 21:12:51 np0005534070.novalocal systemd[1]: Starting man-db-cache-update.service...
Nov 24 21:12:51 np0005534070.novalocal systemd[1]: Reloading.
Nov 24 21:12:51 np0005534070.novalocal systemd-rc-local-generator[8972]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:12:51 np0005534070.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 21:12:54 np0005534070.novalocal sudo[8146]: pam_unix(sudo:session): session closed for user root
Nov 24 21:12:56 np0005534070.novalocal python3[11717]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163e3b-3c83-9f0b-507b-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:12:57 np0005534070.novalocal kernel: evm: overlay not supported
Nov 24 21:12:57 np0005534070.novalocal systemd[4303]: Starting D-Bus User Message Bus...
Nov 24 21:12:57 np0005534070.novalocal dbus-broker-launch[12436]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Nov 24 21:12:57 np0005534070.novalocal dbus-broker-launch[12436]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Nov 24 21:12:57 np0005534070.novalocal systemd[4303]: Started D-Bus User Message Bus.
Nov 24 21:12:57 np0005534070.novalocal dbus-broker-lau[12436]: Ready
Nov 24 21:12:57 np0005534070.novalocal systemd[4303]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 24 21:12:57 np0005534070.novalocal systemd[4303]: Created slice Slice /user.
Nov 24 21:12:57 np0005534070.novalocal systemd[4303]: podman-12321.scope: unit configures an IP firewall, but not running as root.
Nov 24 21:12:57 np0005534070.novalocal systemd[4303]: (This warning is only shown for the first unit using IP firewalling.)
Nov 24 21:12:57 np0005534070.novalocal systemd[4303]: Started podman-12321.scope.
Nov 24 21:12:57 np0005534070.novalocal systemd[4303]: Started podman-pause-1a28114f.scope.
Nov 24 21:12:57 np0005534070.novalocal sudo[13014]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqjcagonjlvlkvahzmkihrwlvfgpqvhe ; /usr/bin/python3'
Nov 24 21:12:57 np0005534070.novalocal sudo[13014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:12:58 np0005534070.novalocal python3[13043]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.143:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.143:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:12:58 np0005534070.novalocal python3[13043]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Nov 24 21:12:58 np0005534070.novalocal sudo[13014]: pam_unix(sudo:session): session closed for user root
Nov 24 21:12:58 np0005534070.novalocal sshd-session[8122]: Connection closed by 38.102.83.114 port 60910
Nov 24 21:12:58 np0005534070.novalocal sshd-session[8119]: pam_unix(sshd:session): session closed for user zuul
Nov 24 21:12:58 np0005534070.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Nov 24 21:12:58 np0005534070.novalocal systemd[1]: session-5.scope: Consumed 1min 1.610s CPU time.
Nov 24 21:12:58 np0005534070.novalocal systemd-logind[806]: Session 5 logged out. Waiting for processes to exit.
Nov 24 21:12:58 np0005534070.novalocal systemd-logind[806]: Removed session 5.
Nov 24 21:13:02 np0005534070.novalocal sshd-session[14131]: Received disconnect from 116.71.136.125 port 48246:11: Bye Bye [preauth]
Nov 24 21:13:02 np0005534070.novalocal sshd-session[14131]: Disconnected from authenticating user root 116.71.136.125 port 48246 [preauth]
Nov 24 21:13:03 np0005534070.novalocal sshd[1010]: Timeout before authentication for connection from 120.211.145.102 to 38.102.83.66, pid = 7523
Nov 24 21:13:16 np0005534070.novalocal sshd-session[19403]: Connection closed by 38.102.83.200 port 51084 [preauth]
Nov 24 21:13:16 np0005534070.novalocal sshd-session[19406]: Unable to negotiate with 38.102.83.200 port 51122: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Nov 24 21:13:16 np0005534070.novalocal sshd-session[19408]: Connection closed by 38.102.83.200 port 51098 [preauth]
Nov 24 21:13:16 np0005534070.novalocal sshd-session[19409]: Unable to negotiate with 38.102.83.200 port 51100: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Nov 24 21:13:16 np0005534070.novalocal sshd-session[19411]: Unable to negotiate with 38.102.83.200 port 51116: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Nov 24 21:13:19 np0005534070.novalocal irqbalance[801]: Cannot change IRQ 27 affinity: Operation not permitted
Nov 24 21:13:19 np0005534070.novalocal irqbalance[801]: IRQ 27 affinity is now unmanaged
Nov 24 21:13:20 np0005534070.novalocal sshd-session[20686]: Accepted publickey for zuul from 38.102.83.114 port 50928 ssh2: RSA SHA256:s+FA6JzLhmwu0B38HrAw2rBq4K/LlOWNM3GY6zwZaMs
Nov 24 21:13:20 np0005534070.novalocal systemd-logind[806]: New session 6 of user zuul.
Nov 24 21:13:20 np0005534070.novalocal systemd[1]: Started Session 6 of User zuul.
Nov 24 21:13:20 np0005534070.novalocal sshd-session[20686]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 21:13:21 np0005534070.novalocal python3[20789]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJ8hq4ll5ADjsHm8NwV6Iw+tSAtq5Pl8DtWtd23VN+RWEK08x9mfBm4UQL+8FZiYgZysOEBxQwrmwXGafX5Tee8= zuul@np0005534069.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 21:13:21 np0005534070.novalocal sudo[20981]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgswwwgajsecqfoivcqwpjtduatcyvvw ; /usr/bin/python3'
Nov 24 21:13:21 np0005534070.novalocal sudo[20981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:13:21 np0005534070.novalocal python3[20991]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJ8hq4ll5ADjsHm8NwV6Iw+tSAtq5Pl8DtWtd23VN+RWEK08x9mfBm4UQL+8FZiYgZysOEBxQwrmwXGafX5Tee8= zuul@np0005534069.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 21:13:21 np0005534070.novalocal sudo[20981]: pam_unix(sudo:session): session closed for user root
Nov 24 21:13:22 np0005534070.novalocal sudo[21292]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqbsegzkzfkuhvkuzpwtfbehuzysvcbg ; /usr/bin/python3'
Nov 24 21:13:22 np0005534070.novalocal sudo[21292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:13:22 np0005534070.novalocal python3[21302]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005534070.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 24 21:13:22 np0005534070.novalocal useradd[21361]: new group: name=cloud-admin, GID=1002
Nov 24 21:13:22 np0005534070.novalocal useradd[21361]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Nov 24 21:13:22 np0005534070.novalocal sudo[21292]: pam_unix(sudo:session): session closed for user root
Nov 24 21:13:22 np0005534070.novalocal sudo[21484]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-embhopisbhkttnelmwfmwxbzxqufsowo ; /usr/bin/python3'
Nov 24 21:13:22 np0005534070.novalocal sudo[21484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:13:23 np0005534070.novalocal python3[21496]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJ8hq4ll5ADjsHm8NwV6Iw+tSAtq5Pl8DtWtd23VN+RWEK08x9mfBm4UQL+8FZiYgZysOEBxQwrmwXGafX5Tee8= zuul@np0005534069.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 21:13:23 np0005534070.novalocal sudo[21484]: pam_unix(sudo:session): session closed for user root
Nov 24 21:13:23 np0005534070.novalocal sudo[21753]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tiarelqfmfatrvtliypwodrzkdugaedg ; /usr/bin/python3'
Nov 24 21:13:23 np0005534070.novalocal sudo[21753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:13:23 np0005534070.novalocal python3[21763]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 21:13:23 np0005534070.novalocal sudo[21753]: pam_unix(sudo:session): session closed for user root
Nov 24 21:13:23 np0005534070.novalocal sudo[21995]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsvfsincrjaymomoxaacxlyzkyylawfw ; /usr/bin/python3'
Nov 24 21:13:23 np0005534070.novalocal sudo[21995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:13:24 np0005534070.novalocal python3[22004]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764018803.1727238-135-266358143142780/source _original_basename=tmpg_0m2szn follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:13:24 np0005534070.novalocal sudo[21995]: pam_unix(sudo:session): session closed for user root
Nov 24 21:13:24 np0005534070.novalocal sudo[22248]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyautoynrpwulnevdgawirjkjecigbtf ; /usr/bin/python3'
Nov 24 21:13:24 np0005534070.novalocal sudo[22248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:13:24 np0005534070.novalocal python3[22255]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Nov 24 21:13:24 np0005534070.novalocal systemd[1]: Starting Hostname Service...
Nov 24 21:13:24 np0005534070.novalocal systemd[1]: Started Hostname Service.
Nov 24 21:13:24 np0005534070.novalocal systemd-hostnamed[22347]: Changed pretty hostname to 'compute-0'
Nov 24 21:13:24 compute-0 systemd-hostnamed[22347]: Hostname set to <compute-0> (static)
Nov 24 21:13:24 compute-0 NetworkManager[7199]: <info>  [1764018804.9908] hostname: static hostname changed from "np0005534070.novalocal" to "compute-0"
Nov 24 21:13:25 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 24 21:13:25 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 24 21:13:25 compute-0 sudo[22248]: pam_unix(sudo:session): session closed for user root
Nov 24 21:13:25 compute-0 sshd-session[20732]: Connection closed by 38.102.83.114 port 50928
Nov 24 21:13:25 compute-0 sshd-session[20686]: pam_unix(sshd:session): session closed for user zuul
Nov 24 21:13:25 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Nov 24 21:13:25 compute-0 systemd[1]: session-6.scope: Consumed 2.647s CPU time.
Nov 24 21:13:25 compute-0 systemd-logind[806]: Session 6 logged out. Waiting for processes to exit.
Nov 24 21:13:25 compute-0 systemd-logind[806]: Removed session 6.
Nov 24 21:13:35 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 24 21:13:42 compute-0 sshd-session[27343]: Invalid user arkserver from 190.129.114.222 port 52648
Nov 24 21:13:42 compute-0 sshd-session[27343]: Received disconnect from 190.129.114.222 port 52648:11: Bye Bye [preauth]
Nov 24 21:13:42 compute-0 sshd-session[27343]: Disconnected from invalid user arkserver 190.129.114.222 port 52648 [preauth]
Nov 24 21:13:52 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 21:13:52 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 24 21:13:52 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1min 10.001s CPU time.
Nov 24 21:13:52 compute-0 systemd[1]: run-r10a2ef9394b54a5cb365d3fe370438e1.service: Deactivated successfully.
Nov 24 21:13:55 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 24 21:14:36 compute-0 sshd-session[29981]: Connection closed by authenticating user root 45.148.10.240 port 39326 [preauth]
Nov 24 21:14:41 compute-0 sshd-session[29983]: Received disconnect from 116.71.136.125 port 34906:11: Bye Bye [preauth]
Nov 24 21:14:41 compute-0 sshd-session[29983]: Disconnected from authenticating user root 116.71.136.125 port 34906 [preauth]
Nov 24 21:16:24 compute-0 sshd-session[29989]: Received disconnect from 116.71.136.125 port 46668:11: Bye Bye [preauth]
Nov 24 21:16:24 compute-0 sshd-session[29989]: Disconnected from authenticating user root 116.71.136.125 port 46668 [preauth]
Nov 24 21:16:36 compute-0 sshd[1010]: Timeout before authentication for connection from 115.190.136.219 to 38.102.83.66, pid = 29980
Nov 24 21:16:45 compute-0 sshd-session[29991]: Invalid user ethereum from 45.148.10.240 port 42916
Nov 24 21:16:45 compute-0 sshd-session[29991]: Connection closed by invalid user ethereum 45.148.10.240 port 42916 [preauth]
Nov 24 21:17:22 compute-0 sshd-session[29994]: Invalid user admin from 80.94.95.115 port 51606
Nov 24 21:17:23 compute-0 sshd-session[29994]: Connection closed by invalid user admin 80.94.95.115 port 51606 [preauth]
Nov 24 21:17:56 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Nov 24 21:17:56 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Nov 24 21:17:56 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Nov 24 21:17:56 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Nov 24 21:18:09 compute-0 sshd-session[30000]: Accepted publickey for zuul from 38.102.83.200 port 34580 ssh2: RSA SHA256:s+FA6JzLhmwu0B38HrAw2rBq4K/LlOWNM3GY6zwZaMs
Nov 24 21:18:09 compute-0 systemd-logind[806]: New session 7 of user zuul.
Nov 24 21:18:09 compute-0 systemd[1]: Started Session 7 of User zuul.
Nov 24 21:18:09 compute-0 sshd-session[30000]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 21:18:09 compute-0 python3[30076]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 21:18:11 compute-0 sudo[30190]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiivfgvmnsjvqbhzgimdjrlfluscravw ; /usr/bin/python3'
Nov 24 21:18:11 compute-0 sudo[30190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:18:11 compute-0 python3[30192]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 21:18:11 compute-0 sudo[30190]: pam_unix(sudo:session): session closed for user root
Nov 24 21:18:11 compute-0 sshd-session[29998]: Received disconnect from 116.71.136.125 port 57560:11: Bye Bye [preauth]
Nov 24 21:18:11 compute-0 sshd-session[29998]: Disconnected from authenticating user root 116.71.136.125 port 57560 [preauth]
Nov 24 21:18:11 compute-0 sudo[30263]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upynqstpjjvsidaeufcafynmisfrbquz ; /usr/bin/python3'
Nov 24 21:18:11 compute-0 sudo[30263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:18:11 compute-0 python3[30265]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764019091.0965314-33573-234380884135086/source mode=0755 _original_basename=delorean.repo follow=False checksum=1830be8248976a7f714fb01ca8550e92dfc79ad2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:18:11 compute-0 sudo[30263]: pam_unix(sudo:session): session closed for user root
Nov 24 21:18:12 compute-0 sudo[30289]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dptunjauwzzsnqggxobutiwnmoovweew ; /usr/bin/python3'
Nov 24 21:18:12 compute-0 sudo[30289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:18:12 compute-0 python3[30291]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 21:18:12 compute-0 sudo[30289]: pam_unix(sudo:session): session closed for user root
Nov 24 21:18:12 compute-0 sudo[30362]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkpdxomiporamkhegcfrtusiwckwwxgt ; /usr/bin/python3'
Nov 24 21:18:12 compute-0 sudo[30362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:18:12 compute-0 python3[30364]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764019091.0965314-33573-234380884135086/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:18:12 compute-0 sudo[30362]: pam_unix(sudo:session): session closed for user root
Nov 24 21:18:12 compute-0 sudo[30388]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-youhacubyqrlhrbslbmoixbmpirrtyqr ; /usr/bin/python3'
Nov 24 21:18:12 compute-0 sudo[30388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:18:12 compute-0 python3[30390]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 21:18:12 compute-0 sudo[30388]: pam_unix(sudo:session): session closed for user root
Nov 24 21:18:13 compute-0 sudo[30461]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tseweovqwmawakgnhnmcxiglawaffsla ; /usr/bin/python3'
Nov 24 21:18:13 compute-0 sudo[30461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:18:13 compute-0 python3[30463]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764019091.0965314-33573-234380884135086/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:18:13 compute-0 sudo[30461]: pam_unix(sudo:session): session closed for user root
Nov 24 21:18:13 compute-0 sudo[30487]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iclusaivagbakyhbvkxgtwskswiodhrz ; /usr/bin/python3'
Nov 24 21:18:13 compute-0 sudo[30487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:18:13 compute-0 python3[30489]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 21:18:13 compute-0 sudo[30487]: pam_unix(sudo:session): session closed for user root
Nov 24 21:18:14 compute-0 sudo[30560]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzaspkrepbuczfqteibyoqewtpepxezk ; /usr/bin/python3'
Nov 24 21:18:14 compute-0 sudo[30560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:18:14 compute-0 python3[30562]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764019091.0965314-33573-234380884135086/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:18:14 compute-0 sudo[30560]: pam_unix(sudo:session): session closed for user root
Nov 24 21:18:14 compute-0 sudo[30586]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axubacsvzcewntdbmnftaludppathgxq ; /usr/bin/python3'
Nov 24 21:18:14 compute-0 sudo[30586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:18:14 compute-0 python3[30588]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 21:18:14 compute-0 sudo[30586]: pam_unix(sudo:session): session closed for user root
Nov 24 21:18:14 compute-0 sudo[30659]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlssejrktzmjjhtntieutmsunmsunbhs ; /usr/bin/python3'
Nov 24 21:18:14 compute-0 sudo[30659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:18:14 compute-0 python3[30661]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764019091.0965314-33573-234380884135086/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:18:14 compute-0 sudo[30659]: pam_unix(sudo:session): session closed for user root
Nov 24 21:18:15 compute-0 sudo[30685]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnahsjhvfdyggaglnrksuylitnsbfvbl ; /usr/bin/python3'
Nov 24 21:18:15 compute-0 sudo[30685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:18:15 compute-0 python3[30687]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 21:18:15 compute-0 sudo[30685]: pam_unix(sudo:session): session closed for user root
Nov 24 21:18:15 compute-0 sudo[30758]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfofewfqrnrggynxylamolcfzfnbriif ; /usr/bin/python3'
Nov 24 21:18:15 compute-0 sudo[30758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:18:15 compute-0 python3[30760]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764019091.0965314-33573-234380884135086/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:18:15 compute-0 sudo[30758]: pam_unix(sudo:session): session closed for user root
Nov 24 21:18:15 compute-0 sudo[30784]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkmalffbytxrfuqshazlttkffnjjzrks ; /usr/bin/python3'
Nov 24 21:18:15 compute-0 sudo[30784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:18:15 compute-0 python3[30786]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 21:18:15 compute-0 sudo[30784]: pam_unix(sudo:session): session closed for user root
Nov 24 21:18:16 compute-0 sudo[30857]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqlzmtrvclptovfslenoufwjnlodutgl ; /usr/bin/python3'
Nov 24 21:18:16 compute-0 sudo[30857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:18:16 compute-0 python3[30859]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764019091.0965314-33573-234380884135086/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6646317362318a9831d66a1804f6bb7dd1b97cd5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:18:16 compute-0 sudo[30857]: pam_unix(sudo:session): session closed for user root
Nov 24 21:18:18 compute-0 sshd-session[30886]: Unable to negotiate with 192.168.122.11 port 50910: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Nov 24 21:18:18 compute-0 sshd-session[30884]: Connection closed by 192.168.122.11 port 50886 [preauth]
Nov 24 21:18:18 compute-0 sshd-session[30885]: Connection closed by 192.168.122.11 port 50898 [preauth]
Nov 24 21:18:18 compute-0 sshd-session[30887]: Unable to negotiate with 192.168.122.11 port 50924: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Nov 24 21:18:18 compute-0 sshd-session[30888]: Unable to negotiate with 192.168.122.11 port 50938: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Nov 24 21:18:48 compute-0 systemd[1]: Starting dnf makecache...
Nov 24 21:18:48 compute-0 sshd-session[30894]: Invalid user eth from 45.148.10.240 port 49224
Nov 24 21:18:48 compute-0 sshd-session[30894]: Connection closed by invalid user eth 45.148.10.240 port 49224 [preauth]
Nov 24 21:18:48 compute-0 dnf[30896]: Failed determining last makecache time.
Nov 24 21:18:48 compute-0 dnf[30896]: delorean-openstack-barbican-42b4c41831408a8e323 359 kB/s |  13 kB     00:00
Nov 24 21:18:48 compute-0 dnf[30896]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 3.0 MB/s |  65 kB     00:00
Nov 24 21:18:48 compute-0 dnf[30896]: delorean-openstack-cinder-1c00d6490d88e436f26ef 1.4 MB/s |  32 kB     00:00
Nov 24 21:18:48 compute-0 dnf[30896]: delorean-python-stevedore-c4acc5639fd2329372142 4.5 MB/s | 131 kB     00:00
Nov 24 21:18:48 compute-0 dnf[30896]: delorean-python-observabilityclient-2f31846d73c 1.2 MB/s |  25 kB     00:00
Nov 24 21:18:48 compute-0 dnf[30896]: delorean-os-net-config-bbae2ed8a159b0435a473f38  16 MB/s | 356 kB     00:00
Nov 24 21:18:48 compute-0 dnf[30896]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 2.0 MB/s |  42 kB     00:00
Nov 24 21:18:48 compute-0 dnf[30896]: delorean-python-designate-tests-tempest-347fdbc 803 kB/s |  18 kB     00:00
Nov 24 21:18:48 compute-0 dnf[30896]: delorean-openstack-glance-1fd12c29b339f30fe823e 784 kB/s |  18 kB     00:00
Nov 24 21:18:48 compute-0 dnf[30896]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 1.5 MB/s |  29 kB     00:00
Nov 24 21:18:49 compute-0 dnf[30896]: delorean-openstack-manila-3c01b7181572c95dac462 1.2 MB/s |  25 kB     00:00
Nov 24 21:18:49 compute-0 dnf[30896]: delorean-python-whitebox-neutron-tests-tempest- 6.3 MB/s | 154 kB     00:00
Nov 24 21:18:49 compute-0 dnf[30896]: delorean-openstack-octavia-ba397f07a7331190208c 1.2 MB/s |  26 kB     00:00
Nov 24 21:18:49 compute-0 dnf[30896]: delorean-openstack-watcher-c014f81a8647287f6dcc 798 kB/s |  16 kB     00:00
Nov 24 21:18:49 compute-0 dnf[30896]: delorean-python-tcib-1124124ec06aadbac34f0d340b 347 kB/s | 7.4 kB     00:00
Nov 24 21:18:49 compute-0 dnf[30896]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 6.2 MB/s | 144 kB     00:00
Nov 24 21:18:49 compute-0 dnf[30896]: delorean-openstack-swift-dc98a8463506ac520c469a 633 kB/s |  14 kB     00:00
Nov 24 21:18:49 compute-0 dnf[30896]: delorean-python-tempestconf-8515371b7cceebd4282 2.3 MB/s |  53 kB     00:00
Nov 24 21:18:49 compute-0 dnf[30896]: delorean-openstack-heat-ui-013accbfd179753bc3f0 4.6 MB/s |  96 kB     00:00
Nov 24 21:18:49 compute-0 dnf[30896]: CentOS Stream 9 - BaseOS                         71 kB/s | 7.3 kB     00:00
Nov 24 21:18:49 compute-0 dnf[30896]: CentOS Stream 9 - AppStream                      80 kB/s | 7.4 kB     00:00
Nov 24 21:18:49 compute-0 dnf[30896]: CentOS Stream 9 - CRB                            47 kB/s | 7.2 kB     00:00
Nov 24 21:18:50 compute-0 dnf[30896]: CentOS Stream 9 - Extras packages                78 kB/s | 8.3 kB     00:00
Nov 24 21:18:50 compute-0 dnf[30896]: dlrn-antelope-testing                            28 MB/s | 1.1 MB     00:00
Nov 24 21:18:50 compute-0 dnf[30896]: dlrn-antelope-build-deps                         15 MB/s | 461 kB     00:00
Nov 24 21:18:50 compute-0 dnf[30896]: centos9-rabbitmq                                9.5 MB/s | 123 kB     00:00
Nov 24 21:18:50 compute-0 dnf[30896]: centos9-storage                                  28 MB/s | 415 kB     00:00
Nov 24 21:18:50 compute-0 dnf[30896]: centos9-opstools                                3.8 MB/s |  51 kB     00:00
Nov 24 21:18:51 compute-0 dnf[30896]: NFV SIG OpenvSwitch                              28 MB/s | 454 kB     00:00
Nov 24 21:18:51 compute-0 dnf[30896]: repo-setup-centos-appstream                     100 MB/s |  25 MB     00:00
Nov 24 21:18:54 compute-0 sshd-session[30956]: Connection closed by 115.190.136.219 port 59184 [preauth]
Nov 24 21:18:57 compute-0 dnf[30896]: repo-setup-centos-baseos                         81 MB/s | 8.8 MB     00:00
Nov 24 21:18:58 compute-0 dnf[30896]: repo-setup-centos-highavailability               30 MB/s | 744 kB     00:00
Nov 24 21:18:58 compute-0 dnf[30896]: repo-setup-centos-powertools                     75 MB/s | 7.3 MB     00:00
Nov 24 21:19:01 compute-0 dnf[30896]: Extra Packages for Enterprise Linux 9 - x86_64   14 MB/s |  20 MB     00:01
Nov 24 21:19:14 compute-0 dnf[30896]: Metadata cache created.
Nov 24 21:19:14 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 24 21:19:14 compute-0 systemd[1]: Finished dnf makecache.
Nov 24 21:19:14 compute-0 systemd[1]: dnf-makecache.service: Consumed 23.375s CPU time.
Nov 24 21:20:49 compute-0 sshd-session[31002]: Invalid user solv from 45.148.10.240 port 40300
Nov 24 21:20:49 compute-0 sshd-session[31002]: Connection closed by invalid user solv 45.148.10.240 port 40300 [preauth]
Nov 24 21:20:58 compute-0 sshd-session[31004]: Connection closed by 115.190.136.219 port 54850 [preauth]
Nov 24 21:21:01 compute-0 python3[31029]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:22:59 compute-0 sshd-session[31034]: Invalid user ubuntu from 45.148.10.240 port 41462
Nov 24 21:22:59 compute-0 sshd-session[31034]: Connection closed by invalid user ubuntu 45.148.10.240 port 41462 [preauth]
Nov 24 21:24:59 compute-0 sshd[1010]: Timeout before authentication for connection from 115.190.136.219 to 38.102.83.66, pid = 31032
Nov 24 21:25:03 compute-0 sshd-session[31038]: Invalid user ubuntu from 45.148.10.240 port 39988
Nov 24 21:25:03 compute-0 sshd-session[31038]: Connection closed by invalid user ubuntu 45.148.10.240 port 39988 [preauth]
Nov 24 21:25:33 compute-0 sshd-session[31040]: Received disconnect from 193.46.255.244 port 44518:11:  [preauth]
Nov 24 21:25:33 compute-0 sshd-session[31040]: Disconnected from authenticating user root 193.46.255.244 port 44518 [preauth]
Nov 24 21:26:00 compute-0 sshd-session[30003]: Received disconnect from 38.102.83.200 port 34580:11: disconnected by user
Nov 24 21:26:00 compute-0 sshd-session[30003]: Disconnected from user zuul 38.102.83.200 port 34580
Nov 24 21:26:00 compute-0 sshd-session[30000]: pam_unix(sshd:session): session closed for user zuul
Nov 24 21:26:00 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Nov 24 21:26:00 compute-0 systemd[1]: session-7.scope: Consumed 5.890s CPU time.
Nov 24 21:26:00 compute-0 systemd-logind[806]: Session 7 logged out. Waiting for processes to exit.
Nov 24 21:26:00 compute-0 systemd-logind[806]: Removed session 7.
Nov 24 21:27:07 compute-0 sshd-session[31044]: Invalid user ubuntu from 45.148.10.240 port 48964
Nov 24 21:27:07 compute-0 sshd-session[31044]: Connection closed by invalid user ubuntu 45.148.10.240 port 48964 [preauth]
Nov 24 21:27:34 compute-0 sshd-session[31046]: Invalid user test from 80.94.95.116 port 15792
Nov 24 21:27:34 compute-0 sshd-session[31046]: Connection closed by invalid user test 80.94.95.116 port 15792 [preauth]
Nov 24 21:29:07 compute-0 sshd[1010]: Timeout before authentication for connection from 115.190.136.219 to 38.102.83.66, pid = 31043
Nov 24 21:29:09 compute-0 sshd[1010]: drop connection #0 from [115.190.136.219]:40350 on [38.102.83.66]:22 penalty: exceeded LoginGraceTime
Nov 24 21:29:17 compute-0 sshd-session[31048]: Invalid user ubuntu from 45.148.10.240 port 35980
Nov 24 21:29:17 compute-0 sshd-session[31048]: Connection closed by invalid user ubuntu 45.148.10.240 port 35980 [preauth]
Nov 24 21:31:23 compute-0 sshd-session[31052]: Invalid user ubuntu from 45.148.10.240 port 53884
Nov 24 21:31:23 compute-0 sshd-session[31052]: Connection closed by invalid user ubuntu 45.148.10.240 port 53884 [preauth]
Nov 24 21:33:26 compute-0 sshd-session[31055]: Invalid user ubuntu from 45.148.10.240 port 38828
Nov 24 21:33:26 compute-0 sshd-session[31055]: Connection closed by invalid user ubuntu 45.148.10.240 port 38828 [preauth]
Nov 24 21:33:46 compute-0 sshd-session[31057]: Accepted publickey for zuul from 192.168.122.30 port 42504 ssh2: ECDSA SHA256:EZFVJPqmz26FBCQurIACD0v24tZqtt5C19oQoS60ZHY
Nov 24 21:33:46 compute-0 systemd-logind[806]: New session 8 of user zuul.
Nov 24 21:33:46 compute-0 systemd[1]: Started Session 8 of User zuul.
Nov 24 21:33:46 compute-0 sshd-session[31057]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 21:33:47 compute-0 python3.9[31210]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 21:33:49 compute-0 sudo[31389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqrcklunbytzswlfhlnquomfxievtrsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020028.465442-32-271886429811805/AnsiballZ_command.py'
Nov 24 21:33:49 compute-0 sudo[31389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:33:49 compute-0 python3.9[31391]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
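    The shell task logged above fetches the repo-setup tool from GitHub, installs it into a throwaway virtualenv, and runs it to lay down the current-podified antelope repositories. A commented restatement of the same sequence (commands, paths, and branch taken directly from the logged _raw_params; only the comments are added):
        set -euxo pipefail
        pushd /var/tmp
        # download and unpack the main branch of openstack-k8s-operators/repo-setup
        curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
        pushd repo-setup-main
        # install the tool into a local virtualenv; PBR_VERSION avoids needing git metadata
        python3 -m venv ./venv
        PBR_VERSION=0.0.0 ./venv/bin/pip install ./
        # write the repo files for the current podified antelope content
        ./venv/bin/repo-setup current-podified -b antelope
        popd
        rm -rf repo-setup-main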
Nov 24 21:33:56 compute-0 sudo[31389]: pam_unix(sudo:session): session closed for user root
Nov 24 21:33:57 compute-0 sshd-session[31060]: Connection closed by 192.168.122.30 port 42504
Nov 24 21:33:57 compute-0 sshd-session[31057]: pam_unix(sshd:session): session closed for user zuul
Nov 24 21:33:57 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Nov 24 21:33:57 compute-0 systemd[1]: session-8.scope: Consumed 8.307s CPU time.
Nov 24 21:33:57 compute-0 systemd-logind[806]: Session 8 logged out. Waiting for processes to exit.
Nov 24 21:33:57 compute-0 systemd-logind[806]: Removed session 8.
Nov 24 21:34:04 compute-0 sshd-session[31449]: Accepted publickey for zuul from 192.168.122.30 port 48122 ssh2: ECDSA SHA256:EZFVJPqmz26FBCQurIACD0v24tZqtt5C19oQoS60ZHY
Nov 24 21:34:04 compute-0 systemd-logind[806]: New session 9 of user zuul.
Nov 24 21:34:04 compute-0 systemd[1]: Started Session 9 of User zuul.
Nov 24 21:34:04 compute-0 sshd-session[31449]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 21:34:05 compute-0 python3.9[31602]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 21:34:06 compute-0 sshd-session[31452]: Connection closed by 192.168.122.30 port 48122
Nov 24 21:34:06 compute-0 sshd-session[31449]: pam_unix(sshd:session): session closed for user zuul
Nov 24 21:34:06 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Nov 24 21:34:06 compute-0 systemd-logind[806]: Session 9 logged out. Waiting for processes to exit.
Nov 24 21:34:06 compute-0 systemd-logind[806]: Removed session 9.
Nov 24 21:34:21 compute-0 sshd-session[31631]: Accepted publickey for zuul from 192.168.122.30 port 43506 ssh2: ECDSA SHA256:EZFVJPqmz26FBCQurIACD0v24tZqtt5C19oQoS60ZHY
Nov 24 21:34:21 compute-0 systemd-logind[806]: New session 10 of user zuul.
Nov 24 21:34:21 compute-0 systemd[1]: Started Session 10 of User zuul.
Nov 24 21:34:21 compute-0 sshd-session[31631]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 21:34:22 compute-0 python3.9[31784]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 24 21:34:24 compute-0 python3.9[31958]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 21:34:24 compute-0 sudo[32108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rocjzdfiueuarcznwndkelrqavzweeop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020064.3421092-45-242343401238587/AnsiballZ_command.py'
Nov 24 21:34:24 compute-0 sudo[32108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:34:25 compute-0 python3.9[32110]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:34:25 compute-0 sudo[32108]: pam_unix(sudo:session): session closed for user root
Nov 24 21:34:25 compute-0 sudo[32261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmxudzuqqlvgtbisklskrbfrlpssmgkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020065.4363313-57-82044591202225/AnsiballZ_stat.py'
Nov 24 21:34:25 compute-0 sudo[32261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:34:26 compute-0 python3.9[32263]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:34:26 compute-0 sudo[32261]: pam_unix(sudo:session): session closed for user root
Nov 24 21:34:26 compute-0 sudo[32413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbyjvxtvhkclapokudxysbkxbmazzfxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020066.3935378-65-138131188784221/AnsiballZ_file.py'
Nov 24 21:34:26 compute-0 sudo[32413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:34:27 compute-0 python3.9[32415]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:34:27 compute-0 sudo[32413]: pam_unix(sudo:session): session closed for user root
Nov 24 21:34:27 compute-0 sudo[32565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdaxozobofxxhdydnpqmuhrzjclwklfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020067.3679287-73-26471056610811/AnsiballZ_stat.py'
Nov 24 21:34:27 compute-0 sudo[32565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:34:27 compute-0 python3.9[32567]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:34:27 compute-0 sudo[32565]: pam_unix(sudo:session): session closed for user root
Nov 24 21:34:28 compute-0 sudo[32688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvuneqztjcckqesearglilhcbndjmyqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020067.3679287-73-26471056610811/AnsiballZ_copy.py'
Nov 24 21:34:28 compute-0 sudo[32688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:34:28 compute-0 python3.9[32690]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764020067.3679287-73-26471056610811/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:34:28 compute-0 sudo[32688]: pam_unix(sudo:session): session closed for user root
Nov 24 21:34:29 compute-0 sudo[32840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftdjmmjjzrkygfuexqhsgcxwdpvwfncn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020068.9535213-88-22297334998499/AnsiballZ_setup.py'
Nov 24 21:34:29 compute-0 sudo[32840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:34:29 compute-0 python3.9[32842]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 21:34:29 compute-0 sudo[32840]: pam_unix(sudo:session): session closed for user root
Nov 24 21:34:30 compute-0 sudo[32996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twhubleyhztoydsulidolcqcnfdeqyzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020070.0562298-96-260216706006877/AnsiballZ_file.py'
Nov 24 21:34:30 compute-0 sudo[32996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:34:30 compute-0 python3.9[32998]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:34:30 compute-0 sudo[32996]: pam_unix(sudo:session): session closed for user root
Nov 24 21:34:31 compute-0 sudo[33148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnyyugbjsicrxrhuyapvfvjuvuuxziey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020070.905984-105-46433836409248/AnsiballZ_file.py'
Nov 24 21:34:31 compute-0 sudo[33148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:34:31 compute-0 python3.9[33150]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:34:31 compute-0 sudo[33148]: pam_unix(sudo:session): session closed for user root
Nov 24 21:34:32 compute-0 python3.9[33300]: ansible-ansible.builtin.service_facts Invoked
Nov 24 21:34:37 compute-0 python3.9[33553]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:34:38 compute-0 python3.9[33703]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 21:34:39 compute-0 python3.9[33857]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 21:34:39 compute-0 sudo[34013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntgeooctwykazoysnhjwlmuzomkoovsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020079.6129208-153-247391556326319/AnsiballZ_setup.py'
Nov 24 21:34:39 compute-0 sudo[34013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:34:40 compute-0 python3.9[34015]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 21:34:40 compute-0 sudo[34013]: pam_unix(sudo:session): session closed for user root
Nov 24 21:34:41 compute-0 sudo[34097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-heektebpgjnkeyzcfrfqkrzacesflmtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020079.6129208-153-247391556326319/AnsiballZ_dnf.py'
Nov 24 21:34:41 compute-0 sudo[34097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:34:41 compute-0 python3.9[34099]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
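    Run outside Ansible, the dnf task above is roughly equivalent to the command below; the package list is copied from the logged invocation, while the -y flag is an assumption (the module runs non-interactively):
        dnf -y install driverctl lvm2 crudini jq nftables NetworkManager \
            openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch \
            sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts \
            grubby sos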
Nov 24 21:35:21 compute-0 sshd[1010]: Timeout before authentication for connection from 115.190.136.219 to 38.102.83.66, pid = 31054
Nov 24 21:35:23 compute-0 systemd[1]: Reloading.
Nov 24 21:35:23 compute-0 systemd-rc-local-generator[34293]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:35:23 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Nov 24 21:35:24 compute-0 systemd[1]: Reloading.
Nov 24 21:35:24 compute-0 systemd-rc-local-generator[34337]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:35:24 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Nov 24 21:35:25 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 24 21:35:25 compute-0 systemd[1]: Reloading.
Nov 24 21:35:25 compute-0 systemd-rc-local-generator[34373]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:35:25 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Nov 24 21:35:25 compute-0 dbus-broker-launch[779]: Noticed file-system modification, trigger reload.
Nov 24 21:35:25 compute-0 dbus-broker-launch[779]: Noticed file-system modification, trigger reload.
Nov 24 21:35:25 compute-0 dbus-broker-launch[779]: Noticed file-system modification, trigger reload.
Nov 24 21:35:34 compute-0 sshd-session[34422]: Invalid user ubuntu from 45.148.10.240 port 54204
Nov 24 21:35:34 compute-0 sshd-session[34422]: Connection closed by invalid user ubuntu 45.148.10.240 port 54204 [preauth]
Nov 24 21:36:28 compute-0 kernel: SELinux:  Converting 2719 SID table entries...
Nov 24 21:36:28 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 21:36:28 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 24 21:36:28 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 21:36:28 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 24 21:36:28 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 21:36:28 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 21:36:28 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 21:36:28 compute-0 dbus-broker-launch[784]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Nov 24 21:36:28 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 21:36:28 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 24 21:36:29 compute-0 systemd[1]: Reloading.
Nov 24 21:36:29 compute-0 systemd-rc-local-generator[34712]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:36:29 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 21:36:29 compute-0 sudo[34097]: pam_unix(sudo:session): session closed for user root
Nov 24 21:36:30 compute-0 sudo[35617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbhbzelqnjkotltdltvwpqupcoejznzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020189.8492084-165-6810882215514/AnsiballZ_command.py'
Nov 24 21:36:30 compute-0 sudo[35617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:36:30 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 21:36:30 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 24 21:36:30 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.320s CPU time.
Nov 24 21:36:30 compute-0 systemd[1]: run-rd8f2ae41b5d542d186ab8aeed28f657d.service: Deactivated successfully.
Nov 24 21:36:30 compute-0 python3.9[35620]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:36:31 compute-0 sudo[35617]: pam_unix(sudo:session): session closed for user root
Nov 24 21:36:32 compute-0 sudo[35899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfburhttzyjeioidyeutbqratanjwces ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020191.7691555-173-280130516906224/AnsiballZ_selinux.py'
Nov 24 21:36:32 compute-0 sudo[35899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:36:32 compute-0 python3.9[35901]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 24 21:36:32 compute-0 sudo[35899]: pam_unix(sudo:session): session closed for user root
Nov 24 21:36:33 compute-0 sudo[36051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdnxrsjpceuqelfunxicifprapynuejw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020193.2512667-184-216419751321732/AnsiballZ_command.py'
Nov 24 21:36:33 compute-0 sudo[36051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:36:33 compute-0 python3.9[36053]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 24 21:36:34 compute-0 sudo[36051]: pam_unix(sudo:session): session closed for user root
Nov 24 21:36:35 compute-0 sudo[36204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqeailfgflnevffnnseazbazrzocjujp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020195.2132773-192-231387675982752/AnsiballZ_file.py'
Nov 24 21:36:35 compute-0 sudo[36204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:36:35 compute-0 python3.9[36206]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:36:35 compute-0 sudo[36204]: pam_unix(sudo:session): session closed for user root
Nov 24 21:36:36 compute-0 sudo[36356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qialstqzwfkdjhubxmpyrhzeccvmaycb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020196.1845899-200-156394039149490/AnsiballZ_mount.py'
Nov 24 21:36:36 compute-0 sudo[36356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:36:37 compute-0 python3.9[36358]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 24 21:36:37 compute-0 sudo[36356]: pam_unix(sudo:session): session closed for user root
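    Taken together, the three tasks above create a 1 GiB swap file, restrict its permissions, and record it in /etc/fstab. A rough manual equivalent is sketched below; the fstab line is inferred from the ansible.posix.mount arguments, and mkswap/swapon do not appear in this log and would still be needed before the file is usable:
        dd if=/dev/zero of=/swap count=1024 bs=1M
        chmod 0600 /swap && chown root:root /swap
        # state=present only edits fstab; it does not format or activate the swap file
        echo '/swap none swap sw 0 0' >> /etc/fstab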
Nov 24 21:36:38 compute-0 sudo[36508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blzcndbtbzzjxjmtekvfgvgkfrkieost ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020198.2969203-228-96922212519366/AnsiballZ_file.py'
Nov 24 21:36:38 compute-0 sudo[36508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:36:38 compute-0 python3.9[36510]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:36:38 compute-0 sudo[36508]: pam_unix(sudo:session): session closed for user root
Nov 24 21:36:39 compute-0 sudo[36660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycqstqkclabvwxxvsgdoykeuxnfilcvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020199.1225-236-213447506650379/AnsiballZ_stat.py'
Nov 24 21:36:39 compute-0 sudo[36660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:36:39 compute-0 python3.9[36662]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:36:39 compute-0 sudo[36660]: pam_unix(sudo:session): session closed for user root
Nov 24 21:36:40 compute-0 sudo[36783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exseylxxrkmwcfecwwdvrhbpyimcihda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020199.1225-236-213447506650379/AnsiballZ_copy.py'
Nov 24 21:36:40 compute-0 sudo[36783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:36:42 compute-0 python3.9[36785]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764020199.1225-236-213447506650379/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8d7d504665527e13e822e08f71c705a5ecc3040d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:36:42 compute-0 sudo[36783]: pam_unix(sudo:session): session closed for user root
Nov 24 21:36:43 compute-0 sudo[36935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjkaswvrugixjpqhiiwygebbwudbizip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020203.5188737-260-13068940436329/AnsiballZ_stat.py'
Nov 24 21:36:43 compute-0 sudo[36935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:36:44 compute-0 python3.9[36937]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:36:44 compute-0 sudo[36935]: pam_unix(sudo:session): session closed for user root
Nov 24 21:36:44 compute-0 sudo[37087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kanvpmrnhixlocdtymemrjjusqzoelhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020204.2371187-268-144469146800412/AnsiballZ_command.py'
Nov 24 21:36:44 compute-0 sudo[37087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:36:44 compute-0 python3.9[37089]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:36:44 compute-0 sudo[37087]: pam_unix(sudo:session): session closed for user root
Nov 24 21:36:45 compute-0 sudo[37240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdzujukoiunniblblqgvohkvzuydjjal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020205.0765872-276-178797371859683/AnsiballZ_file.py'
Nov 24 21:36:45 compute-0 sudo[37240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:36:45 compute-0 python3.9[37242]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:36:45 compute-0 sudo[37240]: pam_unix(sudo:session): session closed for user root
Nov 24 21:36:46 compute-0 sudo[37392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfwvuwqycpbtqqbqvraydzhbpyepbchc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020206.1499906-287-99471962554801/AnsiballZ_getent.py'
Nov 24 21:36:46 compute-0 sudo[37392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:36:46 compute-0 python3.9[37394]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 24 21:36:46 compute-0 sudo[37392]: pam_unix(sudo:session): session closed for user root
Nov 24 21:36:46 compute-0 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 21:36:47 compute-0 sudo[37546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipwrfxyccywuscdiebtrymazsznzebtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020207.1128206-295-155934511656331/AnsiballZ_group.py'
Nov 24 21:36:47 compute-0 sudo[37546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:36:47 compute-0 python3.9[37548]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 24 21:36:47 compute-0 groupadd[37549]: group added to /etc/group: name=qemu, GID=107
Nov 24 21:36:47 compute-0 groupadd[37549]: group added to /etc/gshadow: name=qemu
Nov 24 21:36:47 compute-0 groupadd[37549]: new group: name=qemu, GID=107
Nov 24 21:36:47 compute-0 sudo[37546]: pam_unix(sudo:session): session closed for user root
Nov 24 21:36:48 compute-0 sudo[37704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpddieqlvbinadqixljcfnzwzmlaqfnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020208.2210274-303-81951344295523/AnsiballZ_user.py'
Nov 24 21:36:48 compute-0 sudo[37704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:36:49 compute-0 python3.9[37706]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 24 21:36:49 compute-0 useradd[37708]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Nov 24 21:36:49 compute-0 sudo[37704]: pam_unix(sudo:session): session closed for user root
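    The getent/group/user tasks above pin the qemu account to fixed IDs before any package can create it with different ones. Approximately the same effect from the command line, with the values copied from the logged groupadd/useradd entries:
        groupadd -g 107 qemu
        useradd -u 107 -g qemu -c 'qemu user' -s /sbin/nologin qemu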
Nov 24 21:36:49 compute-0 sudo[37864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzaaxgqsimocrbomiufvrebofatimmrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020209.3791897-311-141571752106664/AnsiballZ_getent.py'
Nov 24 21:36:49 compute-0 sudo[37864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:36:49 compute-0 python3.9[37866]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 24 21:36:49 compute-0 sudo[37864]: pam_unix(sudo:session): session closed for user root
Nov 24 21:36:50 compute-0 sudo[38017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcbpykxwsqgofsfzzqvzfsskkqlctcay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020210.1910534-319-210860889278231/AnsiballZ_group.py'
Nov 24 21:36:50 compute-0 sudo[38017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:36:50 compute-0 python3.9[38019]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 24 21:36:50 compute-0 groupadd[38020]: group added to /etc/group: name=hugetlbfs, GID=42477
Nov 24 21:36:50 compute-0 groupadd[38020]: group added to /etc/gshadow: name=hugetlbfs
Nov 24 21:36:50 compute-0 groupadd[38020]: new group: name=hugetlbfs, GID=42477
Nov 24 21:36:50 compute-0 sudo[38017]: pam_unix(sudo:session): session closed for user root
Nov 24 21:36:51 compute-0 sudo[38175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnubluobehkcnfqaiozvhsqubimmftop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020211.0697646-328-68380459397696/AnsiballZ_file.py'
Nov 24 21:36:51 compute-0 sudo[38175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:36:51 compute-0 python3.9[38177]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 24 21:36:51 compute-0 sudo[38175]: pam_unix(sudo:session): session closed for user root
Nov 24 21:36:52 compute-0 sudo[38327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyaubzjpqcrteqvibykakhxsmclrcizc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020212.0159588-339-54430162685704/AnsiballZ_dnf.py'
Nov 24 21:36:52 compute-0 sudo[38327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:36:52 compute-0 python3.9[38329]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 21:36:54 compute-0 sudo[38327]: pam_unix(sudo:session): session closed for user root
Nov 24 21:36:54 compute-0 sudo[38480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wljunkwtogrzgvlydgdfkaanqxrbahuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020214.3223367-347-23249554986621/AnsiballZ_file.py'
Nov 24 21:36:54 compute-0 sudo[38480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:36:55 compute-0 python3.9[38482]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:36:55 compute-0 sudo[38480]: pam_unix(sudo:session): session closed for user root
Nov 24 21:36:55 compute-0 sudo[38632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgwassoirpkotckymhrbknhvuoxuxxoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020215.2857747-355-265501434361556/AnsiballZ_stat.py'
Nov 24 21:36:55 compute-0 sudo[38632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:36:55 compute-0 python3.9[38634]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:36:55 compute-0 sudo[38632]: pam_unix(sudo:session): session closed for user root
Nov 24 21:36:56 compute-0 sudo[38755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psynrwscagrjzgutxteehjkxjmotqmzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020215.2857747-355-265501434361556/AnsiballZ_copy.py'
Nov 24 21:36:56 compute-0 sudo[38755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:36:56 compute-0 python3.9[38757]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764020215.2857747-355-265501434361556/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:36:56 compute-0 sudo[38755]: pam_unix(sudo:session): session closed for user root
Nov 24 21:36:57 compute-0 sudo[38907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxflimhtulkyiuwzyzwxailrdznkijac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020216.6705205-370-236101073181225/AnsiballZ_systemd.py'
Nov 24 21:36:57 compute-0 sudo[38907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:36:57 compute-0 python3.9[38909]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 21:36:57 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 24 21:36:57 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 24 21:36:57 compute-0 kernel: Bridge firewalling registered
Nov 24 21:36:57 compute-0 systemd-modules-load[38913]: Inserted module 'br_netfilter'
Nov 24 21:36:57 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 24 21:36:57 compute-0 sudo[38907]: pam_unix(sudo:session): session closed for user root
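    The copy task writes /etc/modules-load.d/99-edpm.conf and the systemd task restarts systemd-modules-load, which is what pulls in br_netfilter above. A minimal manual equivalent, assuming br_netfilter is the only module listed in the generated file (the log only shows that this one module was inserted):
        echo br_netfilter > /etc/modules-load.d/99-edpm.conf
        systemctl restart systemd-modules-load.service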
Nov 24 21:36:58 compute-0 sudo[39066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqrhfxxsajeepuavgjqchzpfheyfegtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020218.0432446-378-208669625426736/AnsiballZ_stat.py'
Nov 24 21:36:58 compute-0 sudo[39066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:36:58 compute-0 python3.9[39068]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:36:58 compute-0 sudo[39066]: pam_unix(sudo:session): session closed for user root
Nov 24 21:36:59 compute-0 sudo[39189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fekniczpesaunbggrjpuydjcidpkccst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020218.0432446-378-208669625426736/AnsiballZ_copy.py'
Nov 24 21:36:59 compute-0 sudo[39189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:36:59 compute-0 python3.9[39191]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764020218.0432446-378-208669625426736/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:36:59 compute-0 sudo[39189]: pam_unix(sudo:session): session closed for user root
Nov 24 21:36:59 compute-0 sudo[39341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-digbldgolznkrwfcdxnfqliwhoieevcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020219.5956533-396-16639162570676/AnsiballZ_dnf.py'
Nov 24 21:36:59 compute-0 sudo[39341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:37:00 compute-0 python3.9[39343]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 21:37:03 compute-0 dbus-broker-launch[779]: Noticed file-system modification, trigger reload.
Nov 24 21:37:03 compute-0 dbus-broker-launch[779]: Noticed file-system modification, trigger reload.
Nov 24 21:37:04 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 21:37:04 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 24 21:37:04 compute-0 systemd[1]: Reloading.
Nov 24 21:37:04 compute-0 systemd-rc-local-generator[39407]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:37:04 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 21:37:04 compute-0 sudo[39341]: pam_unix(sudo:session): session closed for user root
Nov 24 21:37:05 compute-0 python3.9[40844]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:37:06 compute-0 python3.9[41820]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 24 21:37:07 compute-0 python3.9[42602]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:37:08 compute-0 sudo[43391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcimbhiceepvagreqfxdjjtnwwyllpnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020227.8606508-435-170853245989990/AnsiballZ_command.py'
Nov 24 21:37:08 compute-0 sudo[43391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:37:08 compute-0 python3.9[43417]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:37:08 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 24 21:37:08 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 21:37:08 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 24 21:37:08 compute-0 systemd[1]: man-db-cache-update.service: Consumed 5.731s CPU time.
Nov 24 21:37:08 compute-0 systemd[1]: run-r9647e9e2838d49d590d19fc2bd270d89.service: Deactivated successfully.
Nov 24 21:37:09 compute-0 systemd[1]: Starting Authorization Manager...
Nov 24 21:37:09 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 24 21:37:09 compute-0 polkitd[43731]: Started polkitd version 0.117
Nov 24 21:37:09 compute-0 polkitd[43731]: Loading rules from directory /etc/polkit-1/rules.d
Nov 24 21:37:09 compute-0 polkitd[43731]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 24 21:37:09 compute-0 polkitd[43731]: Finished loading, compiling and executing 2 rules
Nov 24 21:37:09 compute-0 polkitd[43731]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Nov 24 21:37:09 compute-0 systemd[1]: Started Authorization Manager.
Nov 24 21:37:09 compute-0 sudo[43391]: pam_unix(sudo:session): session closed for user root
Nov 24 21:37:10 compute-0 sudo[43899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbkooedqxszgbjspjgekxhureglhyxee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020229.627018-444-194316743039630/AnsiballZ_systemd.py'
Nov 24 21:37:10 compute-0 sudo[43899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:37:10 compute-0 python3.9[43901]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 21:37:10 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 24 21:37:10 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Nov 24 21:37:10 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 24 21:37:10 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 24 21:37:10 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 24 21:37:10 compute-0 sudo[43899]: pam_unix(sudo:session): session closed for user root
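Editor's note: the tasks above install the tuned packages, switch the active profile with tuned-adm, then enable and restart the tuned service. A sketch of the same sequence, reading back /etc/tuned/active_profile the way the earlier stat/slurp tasks do:

#!/usr/bin/env python3
# Sketch: apply the throughput-performance tuned profile and confirm it,
# roughly what the playbook does in the steps logged above.
import subprocess
from pathlib import Path

PROFILE = "throughput-performance"

subprocess.run(["tuned-adm", "profile", PROFILE], check=True)
subprocess.run(["systemctl", "enable", "tuned"], check=True)
subprocess.run(["systemctl", "restart", "tuned"], check=True)

active = Path("/etc/tuned/active_profile").read_text().strip()
print(f"active tuned profile: {active}")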
Nov 24 21:37:11 compute-0 python3.9[44062]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 24 21:37:14 compute-0 sudo[44212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-terczqzyypweesgbeankhzwzmjtdztht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020233.7511926-501-197989218361686/AnsiballZ_systemd.py'
Nov 24 21:37:14 compute-0 sudo[44212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:37:14 compute-0 python3.9[44214]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 21:37:14 compute-0 systemd[1]: Reloading.
Nov 24 21:37:14 compute-0 systemd-rc-local-generator[44239]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:37:14 compute-0 sudo[44212]: pam_unix(sudo:session): session closed for user root
Nov 24 21:37:15 compute-0 sudo[44400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlvjuijtzfrrhralpmtvzemjbcnxmtwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020234.900231-501-131122058959970/AnsiballZ_systemd.py'
Nov 24 21:37:15 compute-0 sudo[44400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:37:15 compute-0 python3.9[44402]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 21:37:15 compute-0 systemd[1]: Reloading.
Nov 24 21:37:15 compute-0 systemd-rc-local-generator[44431]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:37:15 compute-0 sudo[44400]: pam_unix(sudo:session): session closed for user root
Nov 24 21:37:16 compute-0 sudo[44588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vaglateledajsttpywhisadzqobradqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020236.1852582-517-14923407247860/AnsiballZ_command.py'
Nov 24 21:37:16 compute-0 sudo[44588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:37:16 compute-0 python3.9[44590]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:37:16 compute-0 sudo[44588]: pam_unix(sudo:session): session closed for user root
Nov 24 21:37:17 compute-0 sudo[44741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klfhimextmkckemesqtatyknrngjgyfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020236.9224162-525-188410427769700/AnsiballZ_command.py'
Nov 24 21:37:17 compute-0 sudo[44741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:37:17 compute-0 python3.9[44743]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:37:17 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Nov 24 21:37:17 compute-0 sudo[44741]: pam_unix(sudo:session): session closed for user root
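Editor's note: mkswap and swapon are run against a pre-existing /swap file; the kernel confirms roughly 1 GiB of swap was added. A sketch of the same two steps (it assumes /swap has already been created and sized):

#!/usr/bin/env python3
# Sketch: format and enable a swap file, as in the mkswap/swapon tasks above.
# Assumes /swap already exists; the kernel log shows ~1 GiB being added.
import subprocess

SWAP = "/swap"
subprocess.run(["mkswap", SWAP], check=True)
subprocess.run(["swapon", SWAP], check=True)

# Show what is now active.
print(subprocess.run(["swapon", "--show"], check=True,
                     capture_output=True, text=True).stdout)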
Nov 24 21:37:18 compute-0 sudo[44894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opvvdduxfxfxsbzjhaxbyjpsejgbppey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020237.754642-533-109545385333249/AnsiballZ_command.py'
Nov 24 21:37:18 compute-0 sudo[44894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:37:18 compute-0 python3.9[44896]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:37:19 compute-0 sudo[44894]: pam_unix(sudo:session): session closed for user root
Nov 24 21:37:20 compute-0 sudo[45056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwkcvmccqxlbvxmkqbvmitewypugraye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020240.1272762-541-168981732193208/AnsiballZ_command.py'
Nov 24 21:37:20 compute-0 sudo[45056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:37:21 compute-0 python3.9[45058]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:37:21 compute-0 sudo[45056]: pam_unix(sudo:session): session closed for user root
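Editor's note: KSM is being switched off here; ksm.service and ksmtuned.service are stopped and disabled, and writing 2 to /sys/kernel/mm/ksm/run stops KSM and unmerges already-shared pages. A sketch of the same steps (systemctl disable --now collapses the separate stop/disable calls the playbook makes):

#!/usr/bin/env python3
# Sketch: disable KSM as the tasks above do (2 = stop KSM and unmerge shared pages).
import subprocess
from pathlib import Path

for unit in ("ksm.service", "ksmtuned.service"):
    # check=False: the unit may not be installed on every host.
    subprocess.run(["systemctl", "disable", "--now", unit], check=False)

Path("/sys/kernel/mm/ksm/run").write_text("2\n")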
Nov 24 21:37:21 compute-0 sudo[45209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxebzsyjhetivcdhodlckkgrhdxxdhnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020241.2636638-549-169167479378612/AnsiballZ_systemd.py'
Nov 24 21:37:21 compute-0 sudo[45209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:37:21 compute-0 python3.9[45211]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 21:37:22 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 24 21:37:22 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Nov 24 21:37:22 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Nov 24 21:37:22 compute-0 systemd[1]: Starting Apply Kernel Variables...
Nov 24 21:37:22 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 24 21:37:22 compute-0 systemd[1]: Finished Apply Kernel Variables.
Nov 24 21:37:23 compute-0 sudo[45209]: pam_unix(sudo:session): session closed for user root
Nov 24 21:37:23 compute-0 sshd-session[31634]: Connection closed by 192.168.122.30 port 43506
Nov 24 21:37:23 compute-0 sshd-session[31631]: pam_unix(sshd:session): session closed for user zuul
Nov 24 21:37:23 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Nov 24 21:37:23 compute-0 systemd[1]: session-10.scope: Consumed 2min 18.601s CPU time.
Nov 24 21:37:23 compute-0 systemd-logind[806]: Session 10 logged out. Waiting for processes to exit.
Nov 24 21:37:23 compute-0 systemd-logind[806]: Removed session 10.
Nov 24 21:37:29 compute-0 sshd-session[45241]: Accepted publickey for zuul from 192.168.122.30 port 58770 ssh2: ECDSA SHA256:EZFVJPqmz26FBCQurIACD0v24tZqtt5C19oQoS60ZHY
Nov 24 21:37:30 compute-0 systemd-logind[806]: New session 11 of user zuul.
Nov 24 21:37:30 compute-0 systemd[1]: Started Session 11 of User zuul.
Nov 24 21:37:30 compute-0 sshd-session[45241]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 21:37:31 compute-0 python3.9[45394]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 21:37:32 compute-0 python3.9[45548]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 21:37:33 compute-0 sudo[45702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esiowmpnlsxutkeucnylzvtspakmmhpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020253.3794434-50-160994668958593/AnsiballZ_command.py'
Nov 24 21:37:33 compute-0 sudo[45702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:37:34 compute-0 python3.9[45704]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:37:34 compute-0 sudo[45702]: pam_unix(sudo:session): session closed for user root
Nov 24 21:37:35 compute-0 python3.9[45855]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 21:37:36 compute-0 sudo[46009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmiagulnphxxtpqfbzzeueqzeloserse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020255.6836855-70-149679707026740/AnsiballZ_setup.py'
Nov 24 21:37:36 compute-0 sudo[46009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:37:36 compute-0 python3.9[46011]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 21:37:36 compute-0 sudo[46009]: pam_unix(sudo:session): session closed for user root
Nov 24 21:37:37 compute-0 sudo[46093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtifiyvrbbiwtjhqhpnqwfbauwgythal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020255.6836855-70-149679707026740/AnsiballZ_dnf.py'
Nov 24 21:37:37 compute-0 sudo[46093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:37:37 compute-0 python3.9[46095]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 21:37:38 compute-0 sudo[46093]: pam_unix(sudo:session): session closed for user root
Nov 24 21:37:39 compute-0 sudo[46246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvifxwkcsmbznfhneocixarybprixwzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020258.7351406-82-44356241046402/AnsiballZ_setup.py'
Nov 24 21:37:39 compute-0 sudo[46246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:37:39 compute-0 python3.9[46248]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 21:37:39 compute-0 sshd-session[46249]: Invalid user ubuntu from 45.148.10.240 port 36170
Nov 24 21:37:39 compute-0 sshd-session[46249]: Connection closed by invalid user ubuntu 45.148.10.240 port 36170 [preauth]
Nov 24 21:37:39 compute-0 sudo[46246]: pam_unix(sudo:session): session closed for user root
Nov 24 21:37:40 compute-0 sudo[46419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjohcvwksxclvbivoxmlbyyokwpeculv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020259.9897304-93-61041375130363/AnsiballZ_file.py'
Nov 24 21:37:40 compute-0 sudo[46419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:37:40 compute-0 python3.9[46421]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:37:40 compute-0 sudo[46419]: pam_unix(sudo:session): session closed for user root
Nov 24 21:37:41 compute-0 sudo[46571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnucmjsftdnnecoeuhksbblryxabtztx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020260.97497-101-156947806861165/AnsiballZ_command.py'
Nov 24 21:37:41 compute-0 sudo[46571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:37:41 compute-0 python3.9[46573]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:37:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat4047638568-merged.mount: Deactivated successfully.
Nov 24 21:37:41 compute-0 podman[46574]: 2025-11-24 21:37:41.616295919 +0000 UTC m=+0.065392735 system refresh
Nov 24 21:37:41 compute-0 sudo[46571]: pam_unix(sudo:session): session closed for user root
Nov 24 21:37:42 compute-0 sudo[46734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wohsjqgskezwnkxcbwdzxegjajxygzwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020261.9125507-109-266640523212108/AnsiballZ_stat.py'
Nov 24 21:37:42 compute-0 sudo[46734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:37:42 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 21:37:42 compute-0 python3.9[46736]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:37:42 compute-0 sudo[46734]: pam_unix(sudo:session): session closed for user root
Nov 24 21:37:43 compute-0 sudo[46857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axfeydqdqkutymugjsjylnfgrvhakhxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020261.9125507-109-266640523212108/AnsiballZ_copy.py'
Nov 24 21:37:43 compute-0 sudo[46857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:37:43 compute-0 python3.9[46859]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764020261.9125507-109-266640523212108/.source.json follow=False _original_basename=podman_network_config.j2 checksum=4ce47d030c5481b3e6dbcf69dbee2aa585247c6e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:37:43 compute-0 sudo[46857]: pam_unix(sudo:session): session closed for user root
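Editor's note: the tasks above create /etc/containers/networks, inspect the default podman network, and then install a podman.json rendered from podman_network_config.j2 (its content is not logged). A sketch under the assumption that the pinned definition simply mirrors what podman network inspect reports; the real template may differ:

#!/usr/bin/env python3
# Sketch: capture the default podman network definition and pin it under
# /etc/containers/networks/, similar to the inspect + copy tasks above.
import json
import subprocess
from pathlib import Path

out = subprocess.run(["podman", "network", "inspect", "podman"],
                     check=True, capture_output=True, text=True).stdout
network = json.loads(out)[0]          # inspect returns a JSON array

dest = Path("/etc/containers/networks/podman.json")
dest.parent.mkdir(parents=True, exist_ok=True)
dest.write_text(json.dumps(network, indent=2) + "\n")
dest.chmod(0o644)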
Nov 24 21:37:44 compute-0 sudo[47009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygtawdwtjnjnxxvqesdebjesssifpfat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020263.717913-124-127203560389795/AnsiballZ_stat.py'
Nov 24 21:37:44 compute-0 sudo[47009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:37:44 compute-0 python3.9[47011]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:37:44 compute-0 sudo[47009]: pam_unix(sudo:session): session closed for user root
Nov 24 21:37:44 compute-0 sudo[47132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-temogrkjkhccjqpbpjjaujcqpnnxildu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020263.717913-124-127203560389795/AnsiballZ_copy.py'
Nov 24 21:37:44 compute-0 sudo[47132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:37:45 compute-0 python3.9[47134]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764020263.717913-124-127203560389795/.source.conf follow=False _original_basename=registries.conf.j2 checksum=d5411b5f1341849ed3ee9f32a763e9337f9d711c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:37:45 compute-0 sudo[47132]: pam_unix(sudo:session): session closed for user root
Nov 24 21:37:45 compute-0 sudo[47284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fenmgclsoevnduhnrjumgbsgulbkdgbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020265.289879-140-62262282334905/AnsiballZ_ini_file.py'
Nov 24 21:37:45 compute-0 sudo[47284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:37:45 compute-0 python3.9[47286]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:37:45 compute-0 sudo[47284]: pam_unix(sudo:session): session closed for user root
Nov 24 21:37:46 compute-0 sudo[47436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eofkzqvgzaizfydwzmzzyheetcmhhhvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020266.0696335-140-61236867206432/AnsiballZ_ini_file.py'
Nov 24 21:37:46 compute-0 sudo[47436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:37:46 compute-0 python3.9[47438]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:37:46 compute-0 sudo[47436]: pam_unix(sudo:session): session closed for user root
Nov 24 21:37:47 compute-0 sudo[47588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjopngjxesfbqnzkxclarrnkuiypionj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020266.825128-140-114075360947815/AnsiballZ_ini_file.py'
Nov 24 21:37:47 compute-0 sudo[47588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:37:47 compute-0 python3.9[47590]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:37:47 compute-0 sudo[47588]: pam_unix(sudo:session): session closed for user root
Nov 24 21:37:48 compute-0 sudo[47740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sswbsojnjxdwbuqxkkbwizqrsbfndenz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020267.640105-140-190710538993132/AnsiballZ_ini_file.py'
Nov 24 21:37:48 compute-0 sudo[47740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:37:48 compute-0 python3.9[47742]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:37:48 compute-0 sudo[47740]: pam_unix(sudo:session): session closed for user root
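Editor's note: the four ini_file tasks pin pids_limit=4096 under [containers], events_logger="journald" and runtime="crun" under [engine], and network_backend="netavark" under [network] in /etc/containers/containers.conf. A sketch reproducing those writes with configparser (containers.conf is TOML, but these simple key = value lines round-trip the same way the ini_file module writes them):

#!/usr/bin/env python3
# Sketch: write the four containers.conf settings set by the ini_file tasks above.
import configparser
from pathlib import Path

path = Path("/etc/containers/containers.conf")
cfg = configparser.ConfigParser()
cfg.read(path)                        # keep anything already present

wanted = {
    "containers": {"pids_limit": "4096"},
    "engine": {"events_logger": '"journald"', "runtime": '"crun"'},
    "network": {"network_backend": '"netavark"'},
}
for section, options in wanted.items():
    if not cfg.has_section(section):
        cfg.add_section(section)
    for key, value in options.items():
        cfg.set(section, key, value)

with path.open("w") as fh:
    cfg.write(fh)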
Nov 24 21:37:49 compute-0 python3.9[47892]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 21:37:49 compute-0 sudo[48044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbvnpzcmrdnjcaehncwnqzprfbohpnzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020269.5240119-180-33820627732184/AnsiballZ_dnf.py'
Nov 24 21:37:49 compute-0 sudo[48044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:37:50 compute-0 python3.9[48046]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 24 21:37:51 compute-0 sudo[48044]: pam_unix(sudo:session): session closed for user root
Nov 24 21:37:51 compute-0 sudo[48197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anryhnhxymrsbbusnzflqfceczadlxes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020271.640308-188-191470841149647/AnsiballZ_dnf.py'
Nov 24 21:37:51 compute-0 sudo[48197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:37:52 compute-0 python3.9[48199]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openstack-network-scripts'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 24 21:37:53 compute-0 sudo[48197]: pam_unix(sudo:session): session closed for user root
Nov 24 21:37:54 compute-0 sudo[48357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idqllwjvggdkgqnzymvvdvrnqwmahumn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020274.1401424-198-220697458436814/AnsiballZ_dnf.py'
Nov 24 21:37:54 compute-0 sudo[48357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:37:54 compute-0 python3.9[48359]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['podman', 'buildah'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 24 21:37:55 compute-0 sudo[48357]: pam_unix(sudo:session): session closed for user root
Nov 24 21:37:56 compute-0 sudo[48511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhxzpoordfvnwbhyaatkhqueukhglvnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020276.3465457-207-107671936901396/AnsiballZ_dnf.py'
Nov 24 21:37:56 compute-0 sudo[48511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:37:56 compute-0 python3.9[48513]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['tuned', 'tuned-profiles-cpu-partitioning'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 24 21:37:58 compute-0 sudo[48511]: pam_unix(sudo:session): session closed for user root
Nov 24 21:37:58 compute-0 sudo[48665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esqcgamnjkfbyslksjorbozidyddhfjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020278.5580227-218-279946340748426/AnsiballZ_dnf.py'
Nov 24 21:37:58 compute-0 sudo[48665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:37:59 compute-0 python3.9[48667]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['NetworkManager-ovs'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 24 21:37:59 compute-0 sshd-session[48437]: Connection closed by authenticating user operator 80.94.95.115 port 41770 [preauth]
Nov 24 21:38:00 compute-0 sudo[48665]: pam_unix(sudo:session): session closed for user root
Nov 24 21:38:01 compute-0 sudo[48821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exxqbdwpsrexovcrjrdlhpmwwozdbmbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020281.0535414-226-53786532506031/AnsiballZ_dnf.py'
Nov 24 21:38:01 compute-0 sudo[48821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:38:01 compute-0 python3.9[48823]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['os-net-config'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 24 21:38:04 compute-0 sudo[48821]: pam_unix(sudo:session): session closed for user root
Nov 24 21:38:05 compute-0 sudo[48989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ueuenfbxpkevyrncattteeengivrqobx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020284.95143-235-61186939885907/AnsiballZ_dnf.py'
Nov 24 21:38:05 compute-0 sudo[48989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:38:05 compute-0 python3.9[48991]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openssh-server'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 24 21:38:06 compute-0 sudo[48989]: pam_unix(sudo:session): session closed for user root
Nov 24 21:38:07 compute-0 sudo[49142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvaxggyxwhgmreretryoerokyzvfnlij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020287.1081192-244-121151571130422/AnsiballZ_dnf.py'
Nov 24 21:38:07 compute-0 sudo[49142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:38:07 compute-0 python3.9[49144]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 24 21:38:22 compute-0 sudo[49142]: pam_unix(sudo:session): session closed for user root
Nov 24 21:38:23 compute-0 sudo[49479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcwvnhyjgoktpjxfgxnbrvhfscnawfgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020303.0151613-253-225967851874646/AnsiballZ_dnf.py'
Nov 24 21:38:23 compute-0 sudo[49479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:38:23 compute-0 python3.9[49481]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['iscsi-initiator-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 24 21:38:24 compute-0 sudo[49479]: pam_unix(sudo:session): session closed for user root
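Editor's note: the run of dnf tasks with download_only=True from 21:37:49 onward pre-fetches package sets (host tools, openstack-network-scripts, podman/buildah, tuned, NetworkManager-ovs, os-net-config, openssh-server, the libvirt/qemu stack, iscsi-initiator-utils) without installing them. A sketch of the same idea with plain dnf; only a few of the logged package sets are repeated here:

#!/usr/bin/env python3
# Sketch: pre-download package sets without installing them, like the
# download_only=True dnf tasks above (package names copied from the log).
import subprocess

PACKAGE_SETS = [
    ["podman", "buildah"],
    ["tuned", "tuned-profiles-cpu-partitioning"],
    ["os-net-config"],
    ["openssh-server"],
    ["iscsi-initiator-utils"],
]

for packages in PACKAGE_SETS:
    subprocess.run(["dnf", "-y", "install", "--downloadonly", *packages], check=True)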
Nov 24 21:38:25 compute-0 sudo[49635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbsyowwwvvdvixocgviawotmwplzlxfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020305.3888657-264-70819765903829/AnsiballZ_file.py'
Nov 24 21:38:25 compute-0 sudo[49635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:38:25 compute-0 python3.9[49637]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:38:26 compute-0 sudo[49635]: pam_unix(sudo:session): session closed for user root
Nov 24 21:38:26 compute-0 sudo[49810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kiebfstzduzzmwdlxclgmestsqnfjszj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020306.2183962-272-30733363583293/AnsiballZ_stat.py'
Nov 24 21:38:26 compute-0 sudo[49810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:38:26 compute-0 python3.9[49812]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:38:26 compute-0 sudo[49810]: pam_unix(sudo:session): session closed for user root
Nov 24 21:38:27 compute-0 sudo[49933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwytpbberihaqkinrxrmpqwnodgyjnnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020306.2183962-272-30733363583293/AnsiballZ_copy.py'
Nov 24 21:38:27 compute-0 sudo[49933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:38:27 compute-0 python3.9[49935]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764020306.2183962-272-30733363583293/.source.json _original_basename=.r2_kkfod follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:38:27 compute-0 sudo[49933]: pam_unix(sudo:session): session closed for user root
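Editor's note: an auth.json for registry authentication is installed under /root/.config/containers with mode 0660; the copied content is not logged. A sketch that writes an empty credential store, which is only an assumption about what the playbook deployed:

#!/usr/bin/env python3
# Sketch: create a podman auth file for the later --authfile pulls. The real
# content is not logged; an empty credential store is assumed here.
import json
from pathlib import Path

auth = Path("/root/.config/containers/auth.json")
auth.parent.mkdir(parents=True, exist_ok=True)
auth.write_text(json.dumps({}) + "\n")
auth.chmod(0o660)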
Nov 24 21:38:28 compute-0 sudo[50085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwiaeweaeenjdfawmkezomwytyjazoka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020307.8409154-290-227592190084229/AnsiballZ_podman_image.py'
Nov 24 21:38:28 compute-0 sudo[50085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:38:28 compute-0 python3.9[50087]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 24 21:38:28 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 21:38:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat1315698181-merged.mount: Deactivated successfully.
Nov 24 21:38:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat1315698181-lower\x2dmapped.mount: Deactivated successfully.
Nov 24 21:38:34 compute-0 podman[50100]: 2025-11-24 21:38:34.408783293 +0000 UTC m=+5.656135460 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 24 21:38:34 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 21:38:34 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 21:38:34 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 21:38:34 compute-0 sudo[50085]: pam_unix(sudo:session): session closed for user root
Nov 24 21:38:35 compute-0 sudo[50394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urdyyjelzsxbtfnnqhdwhacbklfnehft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020315.1269324-301-156000762282674/AnsiballZ_podman_image.py'
Nov 24 21:38:35 compute-0 sudo[50394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:38:35 compute-0 python3.9[50396]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 24 21:38:35 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 21:38:44 compute-0 podman[50409]: 2025-11-24 21:38:44.604043521 +0000 UTC m=+8.768423310 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 21:38:44 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 21:38:44 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 21:38:44 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 21:38:44 compute-0 sudo[50394]: pam_unix(sudo:session): session closed for user root
Nov 24 21:38:45 compute-0 sudo[50707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdhpvpgaaaurmspttwrohmonneupflgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020325.2970018-311-1274568417619/AnsiballZ_podman_image.py'
Nov 24 21:38:45 compute-0 sudo[50707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:38:45 compute-0 python3.9[50709]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 24 21:38:45 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 21:38:47 compute-0 podman[50720]: 2025-11-24 21:38:47.07950609 +0000 UTC m=+1.126937400 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 24 21:38:47 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 21:38:47 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 21:38:47 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 21:38:47 compute-0 sudo[50707]: pam_unix(sudo:session): session closed for user root
Nov 24 21:38:48 compute-0 sudo[50954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncsvvbscibakurjflgomtfmpvzjnqada ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020327.6745763-320-99515753416995/AnsiballZ_podman_image.py'
Nov 24 21:38:48 compute-0 sudo[50954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:38:48 compute-0 python3.9[50956]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 24 21:38:48 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 21:38:59 compute-0 podman[50969]: 2025-11-24 21:38:59.866086639 +0000 UTC m=+11.549948119 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 24 21:38:59 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 21:38:59 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 21:39:00 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 21:39:00 compute-0 sudo[50954]: pam_unix(sudo:session): session closed for user root
Nov 24 21:39:01 compute-0 sudo[51256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqfnapidtxxwuivfpmvndlbduuqryrvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020340.7189345-331-274818620938454/AnsiballZ_podman_image.py'
Nov 24 21:39:01 compute-0 sudo[51256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:39:01 compute-0 python3.9[51258]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 24 21:39:01 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 21:39:14 compute-0 podman[51270]: 2025-11-24 21:39:14.640882851 +0000 UTC m=+13.239668278 image pull 62d0cdbd80511c7b16dc1b12830c26126f29d8961a194546e50bdb4d0a16aab7 quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Nov 24 21:39:14 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 21:39:14 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 21:39:14 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 21:39:14 compute-0 sudo[51256]: pam_unix(sudo:session): session closed for user root
Nov 24 21:39:15 compute-0 sudo[51596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdolgkotlclqxepdkhipnrwinqwaouos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020355.1530828-331-205267021889935/AnsiballZ_podman_image.py'
Nov 24 21:39:15 compute-0 sudo[51596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:39:15 compute-0 python3.9[51598]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/prometheus/node-exporter:v1.5.0 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 24 21:39:15 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 21:39:16 compute-0 podman[51610]: 2025-11-24 21:39:16.948570753 +0000 UTC m=+1.135038153 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Nov 24 21:39:16 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 21:39:17 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 21:39:17 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 21:39:17 compute-0 sudo[51596]: pam_unix(sudo:session): session closed for user root
Nov 24 21:39:17 compute-0 sudo[51880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgnmrckzsjpjkrwenvrbkebqpwgewnym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020357.4439769-347-218606484792362/AnsiballZ_podman_image.py'
Nov 24 21:39:17 compute-0 sudo[51880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:39:18 compute-0 python3.9[51882]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 24 21:39:18 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 21:39:21 compute-0 podman[51895]: 2025-11-24 21:39:21.069554868 +0000 UTC m=+2.766057537 image pull 02e0056780c6b31017996766cd13000137ba644dac3fc851da034db8cf4ceb2c quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Nov 24 21:39:21 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 21:39:21 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 21:39:21 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 21:39:21 compute-0 sudo[51880]: pam_unix(sudo:session): session closed for user root
Nov 24 21:39:21 compute-0 sudo[52148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpllsskpskuhxwjjshnovcueregkwhru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020361.5728745-347-152265868373492/AnsiballZ_podman_image.py'
Nov 24 21:39:21 compute-0 sudo[52148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:39:22 compute-0 python3.9[52150]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/sustainable_computing_io/kepler:release-0.7.12 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 24 21:39:22 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 21:39:27 compute-0 podman[52163]: 2025-11-24 21:39:27.813712379 +0000 UTC m=+5.616067136 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Nov 24 21:39:27 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 21:39:27 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 21:39:27 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 21:39:28 compute-0 sudo[52148]: pam_unix(sudo:session): session closed for user root
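Editor's note: the podman_image tasks between 21:38:28 and 21:39:28 pre-pull the service images using the auth file above. A sketch of the equivalent pulls, with the image references taken from the log:

#!/usr/bin/env python3
# Sketch: pre-pull the container images fetched by the podman_image tasks above.
import subprocess

AUTH_FILE = "/root/.config/containers/auth.json"
IMAGES = [
    "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified",
    "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified",
    "quay.io/podified-antelope-centos9/openstack-multipathd:current-podified",
    "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified",
    "quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested",
    "quay.io/prometheus/node-exporter:v1.5.0",
    "quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified",
    "quay.io/sustainable_computing_io/kepler:release-0.7.12",
]

for image in IMAGES:
    subprocess.run(["podman", "pull", "--authfile", AUTH_FILE, image], check=True)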
Nov 24 21:39:28 compute-0 sshd-session[45244]: Connection closed by 192.168.122.30 port 58770
Nov 24 21:39:28 compute-0 sshd-session[45241]: pam_unix(sshd:session): session closed for user zuul
Nov 24 21:39:28 compute-0 systemd-logind[806]: Session 11 logged out. Waiting for processes to exit.
Nov 24 21:39:28 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Nov 24 21:39:28 compute-0 systemd[1]: session-11.scope: Consumed 2min 39.649s CPU time.
Nov 24 21:39:28 compute-0 systemd-logind[806]: Removed session 11.
Nov 24 21:39:34 compute-0 sshd-session[52415]: Accepted publickey for zuul from 192.168.122.30 port 35834 ssh2: ECDSA SHA256:EZFVJPqmz26FBCQurIACD0v24tZqtt5C19oQoS60ZHY
Nov 24 21:39:34 compute-0 systemd-logind[806]: New session 12 of user zuul.
Nov 24 21:39:34 compute-0 systemd[1]: Started Session 12 of User zuul.
Nov 24 21:39:35 compute-0 sshd-session[52415]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 21:39:36 compute-0 python3.9[52568]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 21:39:37 compute-0 sudo[52722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlbfifqfuxnyixbjdedhzdkfjquccxyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020376.7621925-36-245918946274050/AnsiballZ_getent.py'
Nov 24 21:39:37 compute-0 sudo[52722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:39:37 compute-0 python3.9[52724]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 24 21:39:37 compute-0 sudo[52722]: pam_unix(sudo:session): session closed for user root
Nov 24 21:39:38 compute-0 sudo[52875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvfeufenbasujsqlikaagsxqkldmmofz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020377.8694422-44-44838657331866/AnsiballZ_group.py'
Nov 24 21:39:38 compute-0 sudo[52875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:39:38 compute-0 python3.9[52877]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 24 21:39:38 compute-0 groupadd[52878]: group added to /etc/group: name=openvswitch, GID=42476
Nov 24 21:39:38 compute-0 groupadd[52878]: group added to /etc/gshadow: name=openvswitch
Nov 24 21:39:38 compute-0 groupadd[52878]: new group: name=openvswitch, GID=42476
Nov 24 21:39:38 compute-0 sudo[52875]: pam_unix(sudo:session): session closed for user root
Nov 24 21:39:39 compute-0 sudo[53033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nedmnahnbpcbvxqfnfcddfgsbqouwprh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020378.8514142-52-28591552321997/AnsiballZ_user.py'
Nov 24 21:39:39 compute-0 sudo[53033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:39:39 compute-0 python3.9[53035]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 24 21:39:39 compute-0 useradd[53037]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Nov 24 21:39:39 compute-0 useradd[53037]: add 'openvswitch' to group 'hugetlbfs'
Nov 24 21:39:39 compute-0 useradd[53037]: add 'openvswitch' to shadow group 'hugetlbfs'
Nov 24 21:39:39 compute-0 sudo[53033]: pam_unix(sudo:session): session closed for user root
Nov 24 21:39:40 compute-0 sudo[53193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzissjphhzkjlvwqfcuufwvedgrqwhhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020380.1197996-62-191263220515167/AnsiballZ_setup.py'
Nov 24 21:39:40 compute-0 sudo[53193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:39:40 compute-0 python3.9[53195]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 21:39:41 compute-0 sudo[53193]: pam_unix(sudo:session): session closed for user root
Nov 24 21:39:41 compute-0 sudo[53279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igdntiukguswjqmfsugfuvsnoqsqwhbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020380.1197996-62-191263220515167/AnsiballZ_dnf.py'
Nov 24 21:39:41 compute-0 sudo[53279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:39:41 compute-0 sshd-session[53204]: Invalid user ubuntu from 45.148.10.240 port 55854
Nov 24 21:39:41 compute-0 sshd-session[53204]: Connection closed by invalid user ubuntu 45.148.10.240 port 55854 [preauth]
Nov 24 21:39:41 compute-0 python3.9[53281]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 24 21:39:43 compute-0 sudo[53279]: pam_unix(sudo:session): session closed for user root
Nov 24 21:39:44 compute-0 sudo[53440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxzbijdxktucudkogxumnupvfdycptru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020383.612545-76-235929367915875/AnsiballZ_dnf.py'
Nov 24 21:39:44 compute-0 sudo[53440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:39:44 compute-0 python3.9[53442]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 21:39:56 compute-0 kernel: SELinux:  Converting 2732 SID table entries...
Nov 24 21:39:56 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 21:39:56 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 24 21:39:56 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 21:39:56 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 24 21:39:56 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 21:39:56 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 21:39:56 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 21:39:56 compute-0 groupadd[53466]: group added to /etc/group: name=unbound, GID=993
Nov 24 21:39:56 compute-0 groupadd[53466]: group added to /etc/gshadow: name=unbound
Nov 24 21:39:56 compute-0 groupadd[53466]: new group: name=unbound, GID=993
Nov 24 21:39:56 compute-0 useradd[53473]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Nov 24 21:39:56 compute-0 dbus-broker-launch[784]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Nov 24 21:39:56 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Nov 24 21:39:57 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 21:39:57 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 24 21:39:57 compute-0 systemd[1]: Reloading.
Nov 24 21:39:58 compute-0 systemd-rc-local-generator[53972]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:39:58 compute-0 systemd-sysv-generator[53976]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:39:58 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 21:39:58 compute-0 sudo[53440]: pam_unix(sudo:session): session closed for user root
Nov 24 21:39:58 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 21:39:58 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 24 21:39:58 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.054s CPU time.
Nov 24 21:39:58 compute-0 systemd[1]: run-r74bf5abe197e4b12b5bbfe3465330b97.service: Deactivated successfully.
Nov 24 21:39:59 compute-0 sudo[54539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deauiztssptvoblozulbdwisxrdtiyvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020399.0362048-84-188017558547384/AnsiballZ_systemd.py'
Nov 24 21:39:59 compute-0 sudo[54539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:40:00 compute-0 python3.9[54541]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 21:40:00 compute-0 systemd[1]: Reloading.
Nov 24 21:40:00 compute-0 systemd-rc-local-generator[54565]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:40:00 compute-0 systemd-sysv-generator[54570]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:40:00 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Nov 24 21:40:00 compute-0 chown[54583]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Nov 24 21:40:00 compute-0 ovs-ctl[54588]: /etc/openvswitch/conf.db does not exist ... (warning).
Nov 24 21:40:00 compute-0 ovs-ctl[54588]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Nov 24 21:40:00 compute-0 ovs-ctl[54588]: Starting ovsdb-server [  OK  ]
Nov 24 21:40:00 compute-0 ovs-vsctl[54637]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Nov 24 21:40:01 compute-0 ovs-vsctl[54657]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"d2f80616-70e9-484c-836d-1edab81fe5d9\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Nov 24 21:40:01 compute-0 ovs-ctl[54588]: Configuring Open vSwitch system IDs [  OK  ]
Nov 24 21:40:01 compute-0 ovs-vsctl[54663]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 24 21:40:01 compute-0 ovs-ctl[54588]: Enabling remote OVSDB managers [  OK  ]
Nov 24 21:40:01 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Nov 24 21:40:01 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Nov 24 21:40:01 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Nov 24 21:40:01 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 24 21:40:01 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Nov 24 21:40:01 compute-0 ovs-ctl[54708]: Inserting openvswitch module [  OK  ]
Nov 24 21:40:02 compute-0 ovs-ctl[54677]: Starting ovs-vswitchd [  OK  ]
Nov 24 21:40:02 compute-0 ovs-vsctl[54725]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 24 21:40:02 compute-0 ovs-ctl[54677]: Enabling remote OVSDB managers [  OK  ]
Nov 24 21:40:02 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 24 21:40:02 compute-0 systemd[1]: Starting Open vSwitch...
Nov 24 21:40:02 compute-0 systemd[1]: Finished Open vSwitch.
Nov 24 21:40:02 compute-0 sudo[54539]: pam_unix(sudo:session): session closed for user root
Nov 24 21:40:03 compute-0 python3.9[54877]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 21:40:03 compute-0 sudo[55027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yphsszmmnclxueuzqtwrycalhnrbcunv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020403.3059356-102-163745887559195/AnsiballZ_sefcontext.py'
Nov 24 21:40:03 compute-0 sudo[55027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:40:04 compute-0 python3.9[55029]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 24 21:40:05 compute-0 kernel: SELinux:  Converting 2746 SID table entries...
Nov 24 21:40:05 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 21:40:05 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 24 21:40:05 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 21:40:05 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 24 21:40:05 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 21:40:05 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 21:40:05 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 21:40:05 compute-0 sudo[55027]: pam_unix(sudo:session): session closed for user root
Nov 24 21:40:06 compute-0 python3.9[55184]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 21:40:07 compute-0 sudo[55340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juyqtdmbofckipdeyqtkskwpxyvqwyex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020406.9194348-120-174986342359789/AnsiballZ_dnf.py'
Nov 24 21:40:07 compute-0 dbus-broker-launch[784]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Nov 24 21:40:07 compute-0 sudo[55340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:40:07 compute-0 python3.9[55342]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 21:40:08 compute-0 sudo[55340]: pam_unix(sudo:session): session closed for user root
Nov 24 21:40:09 compute-0 sudo[55493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oczuloptchemtqfwgxkbasxdrhccnits ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020409.0996644-128-258567194689057/AnsiballZ_command.py'
Nov 24 21:40:09 compute-0 sudo[55493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:40:09 compute-0 python3.9[55495]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:40:10 compute-0 sudo[55493]: pam_unix(sudo:session): session closed for user root
Nov 24 21:40:11 compute-0 sudo[55780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbeyzqqmlcbwtpfvbdexqpodxhwsyqbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020410.841959-136-203470242894960/AnsiballZ_file.py'
Nov 24 21:40:11 compute-0 sudo[55780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:40:11 compute-0 python3.9[55782]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 24 21:40:11 compute-0 sudo[55780]: pam_unix(sudo:session): session closed for user root
Nov 24 21:40:12 compute-0 python3.9[55932]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:40:13 compute-0 sudo[56084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-taxpvuyqjlzumqaszkxwencemkzrqyax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020412.8599815-152-226441134041926/AnsiballZ_dnf.py'
Nov 24 21:40:13 compute-0 sudo[56084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:40:13 compute-0 python3.9[56086]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 21:40:15 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 21:40:15 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 24 21:40:15 compute-0 systemd[1]: Reloading.
Nov 24 21:40:15 compute-0 systemd-rc-local-generator[56127]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:40:15 compute-0 systemd-sysv-generator[56131]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:40:15 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 21:40:15 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 21:40:15 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 24 21:40:15 compute-0 systemd[1]: run-raf0cbf099e0d48469bf9c9832091b1ac.service: Deactivated successfully.
Nov 24 21:40:15 compute-0 sudo[56084]: pam_unix(sudo:session): session closed for user root
Nov 24 21:40:16 compute-0 sudo[56403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfeciobzhgeebngxwzffffccjmyfzyvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020416.0324724-160-128038111996405/AnsiballZ_systemd.py'
Nov 24 21:40:16 compute-0 sudo[56403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:40:16 compute-0 python3.9[56405]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 21:40:16 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 24 21:40:16 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Nov 24 21:40:16 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Nov 24 21:40:16 compute-0 systemd[1]: Stopping Network Manager...
Nov 24 21:40:16 compute-0 NetworkManager[7199]: <info>  [1764020416.8241] caught SIGTERM, shutting down normally.
Nov 24 21:40:16 compute-0 NetworkManager[7199]: <info>  [1764020416.8259] dhcp4 (eth0): canceled DHCP transaction
Nov 24 21:40:16 compute-0 NetworkManager[7199]: <info>  [1764020416.8259] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 24 21:40:16 compute-0 NetworkManager[7199]: <info>  [1764020416.8259] dhcp4 (eth0): state changed no lease
Nov 24 21:40:16 compute-0 NetworkManager[7199]: <info>  [1764020416.8261] manager: NetworkManager state is now CONNECTED_SITE
Nov 24 21:40:16 compute-0 NetworkManager[7199]: <info>  [1764020416.8337] exiting (success)
Nov 24 21:40:16 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 24 21:40:16 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 24 21:40:16 compute-0 systemd[1]: Stopped Network Manager.
Nov 24 21:40:16 compute-0 systemd[1]: NetworkManager.service: Consumed 13.935s CPU time, 4.1M memory peak, read 0B from disk, written 27.5K to disk.
Nov 24 21:40:16 compute-0 systemd[1]: Starting Network Manager...
Nov 24 21:40:16 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 24 21:40:16 compute-0 NetworkManager[56413]: <info>  [1764020416.8970] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:6af3ca85-64ae-4c3b-bcae-1314bd1d1259)
Nov 24 21:40:16 compute-0 NetworkManager[56413]: <info>  [1764020416.8971] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 24 21:40:16 compute-0 NetworkManager[56413]: <info>  [1764020416.9038] manager[0x55d9d0133090]: monitoring kernel firmware directory '/lib/firmware'.
Nov 24 21:40:16 compute-0 systemd[1]: Starting Hostname Service...
Nov 24 21:40:16 compute-0 systemd[1]: Started Hostname Service.
Nov 24 21:40:16 compute-0 NetworkManager[56413]: <info>  [1764020416.9809] hostname: hostname: using hostnamed
Nov 24 21:40:16 compute-0 NetworkManager[56413]: <info>  [1764020416.9810] hostname: static hostname changed from (none) to "compute-0"
Nov 24 21:40:16 compute-0 NetworkManager[56413]: <info>  [1764020416.9815] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 24 21:40:16 compute-0 NetworkManager[56413]: <info>  [1764020416.9820] manager[0x55d9d0133090]: rfkill: Wi-Fi hardware radio set enabled
Nov 24 21:40:16 compute-0 NetworkManager[56413]: <info>  [1764020416.9821] manager[0x55d9d0133090]: rfkill: WWAN hardware radio set enabled
Nov 24 21:40:16 compute-0 NetworkManager[56413]: <info>  [1764020416.9839] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Nov 24 21:40:16 compute-0 NetworkManager[56413]: <info>  [1764020416.9848] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 24 21:40:16 compute-0 NetworkManager[56413]: <info>  [1764020416.9848] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 24 21:40:16 compute-0 NetworkManager[56413]: <info>  [1764020416.9848] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 24 21:40:16 compute-0 NetworkManager[56413]: <info>  [1764020416.9849] manager: Networking is enabled by state file
Nov 24 21:40:16 compute-0 NetworkManager[56413]: <info>  [1764020416.9851] settings: Loaded settings plugin: keyfile (internal)
Nov 24 21:40:16 compute-0 NetworkManager[56413]: <info>  [1764020416.9854] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 24 21:40:16 compute-0 NetworkManager[56413]: <info>  [1764020416.9880] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 24 21:40:16 compute-0 NetworkManager[56413]: <info>  [1764020416.9889] dhcp: init: Using DHCP client 'internal'
Nov 24 21:40:16 compute-0 NetworkManager[56413]: <info>  [1764020416.9892] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 24 21:40:16 compute-0 NetworkManager[56413]: <info>  [1764020416.9897] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 21:40:16 compute-0 NetworkManager[56413]: <info>  [1764020416.9901] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 24 21:40:16 compute-0 NetworkManager[56413]: <info>  [1764020416.9908] device (lo): Activation: starting connection 'lo' (9e721d51-16df-4701-8382-ea90a88a1946)
Nov 24 21:40:16 compute-0 NetworkManager[56413]: <info>  [1764020416.9914] device (eth0): carrier: link connected
Nov 24 21:40:16 compute-0 NetworkManager[56413]: <info>  [1764020416.9918] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 24 21:40:16 compute-0 NetworkManager[56413]: <info>  [1764020416.9923] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 24 21:40:16 compute-0 NetworkManager[56413]: <info>  [1764020416.9923] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 24 21:40:16 compute-0 NetworkManager[56413]: <info>  [1764020416.9928] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 24 21:40:16 compute-0 NetworkManager[56413]: <info>  [1764020416.9934] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 24 21:40:16 compute-0 NetworkManager[56413]: <info>  [1764020416.9939] device (eth1): carrier: link connected
Nov 24 21:40:16 compute-0 NetworkManager[56413]: <info>  [1764020416.9942] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 24 21:40:16 compute-0 NetworkManager[56413]: <info>  [1764020416.9946] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (991fe21a-95d5-515f-bd84-4bf4dd24e652) (indicated)
Nov 24 21:40:16 compute-0 NetworkManager[56413]: <info>  [1764020416.9947] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 24 21:40:16 compute-0 NetworkManager[56413]: <info>  [1764020416.9950] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 24 21:40:16 compute-0 NetworkManager[56413]: <info>  [1764020416.9956] device (eth1): Activation: starting connection 'ci-private-network' (991fe21a-95d5-515f-bd84-4bf4dd24e652)
Nov 24 21:40:16 compute-0 systemd[1]: Started Network Manager.
Nov 24 21:40:16 compute-0 NetworkManager[56413]: <info>  [1764020416.9963] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 24 21:40:16 compute-0 NetworkManager[56413]: <info>  [1764020416.9974] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 24 21:40:16 compute-0 NetworkManager[56413]: <info>  [1764020416.9977] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 24 21:40:16 compute-0 NetworkManager[56413]: <info>  [1764020416.9978] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 24 21:40:16 compute-0 NetworkManager[56413]: <info>  [1764020416.9980] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 24 21:40:16 compute-0 NetworkManager[56413]: <info>  [1764020416.9982] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 24 21:40:16 compute-0 NetworkManager[56413]: <info>  [1764020416.9985] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 24 21:40:16 compute-0 NetworkManager[56413]: <info>  [1764020416.9988] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 24 21:40:16 compute-0 NetworkManager[56413]: <info>  [1764020416.9991] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 24 21:40:16 compute-0 NetworkManager[56413]: <info>  [1764020416.9996] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 24 21:40:17 compute-0 NetworkManager[56413]: <info>  [1764020417.0005] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 24 21:40:17 compute-0 NetworkManager[56413]: <info>  [1764020417.0020] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 24 21:40:17 compute-0 NetworkManager[56413]: <info>  [1764020417.0043] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 24 21:40:17 compute-0 NetworkManager[56413]: <info>  [1764020417.0056] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 24 21:40:17 compute-0 NetworkManager[56413]: <info>  [1764020417.0060] dhcp4 (eth0): state changed new lease, address=38.102.83.66
Nov 24 21:40:17 compute-0 NetworkManager[56413]: <info>  [1764020417.0066] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 24 21:40:17 compute-0 NetworkManager[56413]: <info>  [1764020417.0075] device (lo): Activation: successful, device activated.
Nov 24 21:40:17 compute-0 NetworkManager[56413]: <info>  [1764020417.0090] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 24 21:40:17 compute-0 systemd[1]: Starting Network Manager Wait Online...
Nov 24 21:40:17 compute-0 sudo[56403]: pam_unix(sudo:session): session closed for user root
Nov 24 21:40:17 compute-0 NetworkManager[56413]: <info>  [1764020417.0758] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 24 21:40:17 compute-0 NetworkManager[56413]: <info>  [1764020417.0793] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 24 21:40:17 compute-0 NetworkManager[56413]: <info>  [1764020417.0803] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 24 21:40:17 compute-0 NetworkManager[56413]: <info>  [1764020417.0809] manager: NetworkManager state is now CONNECTED_LOCAL
Nov 24 21:40:17 compute-0 NetworkManager[56413]: <info>  [1764020417.0815] device (eth1): Activation: successful, device activated.
Nov 24 21:40:17 compute-0 NetworkManager[56413]: <info>  [1764020417.0884] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 24 21:40:17 compute-0 NetworkManager[56413]: <info>  [1764020417.0887] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 24 21:40:17 compute-0 NetworkManager[56413]: <info>  [1764020417.0892] manager: NetworkManager state is now CONNECTED_SITE
Nov 24 21:40:17 compute-0 NetworkManager[56413]: <info>  [1764020417.0897] device (eth0): Activation: successful, device activated.
Nov 24 21:40:17 compute-0 NetworkManager[56413]: <info>  [1764020417.0907] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 24 21:40:17 compute-0 NetworkManager[56413]: <info>  [1764020417.0949] manager: startup complete
Nov 24 21:40:17 compute-0 systemd[1]: Finished Network Manager Wait Online.
Nov 24 21:40:17 compute-0 sudo[56630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjgnsftwarefketnwqlspmoirvkmouuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020417.2684748-168-97897325925474/AnsiballZ_dnf.py'
Nov 24 21:40:17 compute-0 sudo[56630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:40:17 compute-0 python3.9[56632]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 21:40:22 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 21:40:22 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 24 21:40:22 compute-0 systemd[1]: Reloading.
Nov 24 21:40:22 compute-0 systemd-sysv-generator[56684]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:40:22 compute-0 systemd-rc-local-generator[56681]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:40:23 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 21:40:24 compute-0 sudo[56630]: pam_unix(sudo:session): session closed for user root
Nov 24 21:40:24 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 21:40:24 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 24 21:40:24 compute-0 systemd[1]: run-r4a8d72a114bf40b9b928316d87856ef9.service: Deactivated successfully.
Nov 24 21:40:25 compute-0 sudo[57088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-devxdarevflddaxsvikbzcqqwwqcevzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020424.8734705-180-64770177844439/AnsiballZ_stat.py'
Nov 24 21:40:25 compute-0 sudo[57088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:40:25 compute-0 python3.9[57090]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:40:25 compute-0 sudo[57088]: pam_unix(sudo:session): session closed for user root
Nov 24 21:40:26 compute-0 sudo[57240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njnuueopiddfcqwmvmscfjccsbfekzhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020425.6227472-189-150103725898048/AnsiballZ_ini_file.py'
Nov 24 21:40:26 compute-0 sudo[57240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:40:26 compute-0 python3.9[57242]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:40:26 compute-0 sudo[57240]: pam_unix(sudo:session): session closed for user root
Nov 24 21:40:26 compute-0 sudo[57394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgyvjbpygjrejxmsrcrqbnumwkwnapml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020426.6279547-199-81061358642807/AnsiballZ_ini_file.py'
Nov 24 21:40:26 compute-0 sudo[57394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:40:27 compute-0 python3.9[57396]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:40:27 compute-0 sudo[57394]: pam_unix(sudo:session): session closed for user root
Nov 24 21:40:27 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 24 21:40:27 compute-0 sudo[57546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sycuttymlcnzcvbfzuzmpclvnjkxfbqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020427.3770428-199-24496366894943/AnsiballZ_ini_file.py'
Nov 24 21:40:27 compute-0 sudo[57546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:40:27 compute-0 python3.9[57548]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:40:27 compute-0 sudo[57546]: pam_unix(sudo:session): session closed for user root
Nov 24 21:40:28 compute-0 sudo[57698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slrwonywrthtalqowutnqrqzriyitolx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020428.3193192-214-129947466766115/AnsiballZ_ini_file.py'
Nov 24 21:40:28 compute-0 sudo[57698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:40:28 compute-0 python3.9[57700]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:40:28 compute-0 sudo[57698]: pam_unix(sudo:session): session closed for user root
Nov 24 21:40:29 compute-0 sudo[57850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kglwydmcwkluzbcknqceoseskgcnakqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020429.0048938-214-121465194848196/AnsiballZ_ini_file.py'
Nov 24 21:40:29 compute-0 sudo[57850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:40:29 compute-0 python3.9[57852]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:40:29 compute-0 sudo[57850]: pam_unix(sudo:session): session closed for user root
Nov 24 21:40:30 compute-0 sudo[58002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-taqvvwwobqnxdztiufscoavtupxvnbhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020429.7580373-229-155943697171850/AnsiballZ_stat.py'
Nov 24 21:40:30 compute-0 sudo[58002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:40:30 compute-0 python3.9[58004]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:40:30 compute-0 sudo[58002]: pam_unix(sudo:session): session closed for user root
Nov 24 21:40:30 compute-0 sudo[58125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imqbtpndfclqlivamfrwqhnwcvfdbste ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020429.7580373-229-155943697171850/AnsiballZ_copy.py'
Nov 24 21:40:30 compute-0 sudo[58125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:40:31 compute-0 python3.9[58127]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764020429.7580373-229-155943697171850/.source _original_basename=._3zug9bu follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:40:31 compute-0 sudo[58125]: pam_unix(sudo:session): session closed for user root
Nov 24 21:40:31 compute-0 sudo[58277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnjrsibpqbpsgxgaxdmvypexpbsefkmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020431.3112109-244-983779398926/AnsiballZ_file.py'
Nov 24 21:40:31 compute-0 sudo[58277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:40:31 compute-0 python3.9[58279]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:40:31 compute-0 sudo[58277]: pam_unix(sudo:session): session closed for user root
Nov 24 21:40:32 compute-0 sudo[58429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqwzezinelapohbtxvakuowuewngmzio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020432.05423-252-98406473627382/AnsiballZ_edpm_os_net_config_mappings.py'
Nov 24 21:40:32 compute-0 sudo[58429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:40:32 compute-0 python3.9[58431]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Nov 24 21:40:32 compute-0 sudo[58429]: pam_unix(sudo:session): session closed for user root
Nov 24 21:40:33 compute-0 sudo[58581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrqblbdpgiyqklfnkepvdjmnpenndaub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020433.0627918-261-15224327814347/AnsiballZ_file.py'
Nov 24 21:40:33 compute-0 sudo[58581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:40:33 compute-0 python3.9[58583]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:40:33 compute-0 sudo[58581]: pam_unix(sudo:session): session closed for user root
Nov 24 21:40:34 compute-0 sudo[58733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbtnzekyesswpheiojiyhhqugqtkivqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020434.0710914-271-54212899897063/AnsiballZ_stat.py'
Nov 24 21:40:34 compute-0 sudo[58733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:40:34 compute-0 sudo[58733]: pam_unix(sudo:session): session closed for user root
Nov 24 21:40:35 compute-0 sudo[58856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oiwcdgdpfewdfdutqaoveijievvbbxep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020434.0710914-271-54212899897063/AnsiballZ_copy.py'
Nov 24 21:40:35 compute-0 sudo[58856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:40:35 compute-0 sudo[58856]: pam_unix(sudo:session): session closed for user root
Nov 24 21:40:35 compute-0 sudo[59008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqupxdggnwnrpsxvarvegpahpbzxcuiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020435.4325945-286-107488152811546/AnsiballZ_slurp.py'
Nov 24 21:40:35 compute-0 sudo[59008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:40:36 compute-0 python3.9[59010]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Nov 24 21:40:36 compute-0 sudo[59008]: pam_unix(sudo:session): session closed for user root
Nov 24 21:40:37 compute-0 sudo[59183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lakunrolesteenejxmgeogfkdxhcbuun ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020436.4894695-295-123922129555439/async_wrapper.py j6041788904 300 /home/zuul/.ansible/tmp/ansible-tmp-1764020436.4894695-295-123922129555439/AnsiballZ_edpm_os_net_config.py _'
Nov 24 21:40:37 compute-0 sudo[59183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:40:37 compute-0 ansible-async_wrapper.py[59185]: Invoked with j6041788904 300 /home/zuul/.ansible/tmp/ansible-tmp-1764020436.4894695-295-123922129555439/AnsiballZ_edpm_os_net_config.py _
Nov 24 21:40:37 compute-0 ansible-async_wrapper.py[59188]: Starting module and watcher
Nov 24 21:40:37 compute-0 ansible-async_wrapper.py[59188]: Start watching 59189 (300)
Nov 24 21:40:37 compute-0 ansible-async_wrapper.py[59189]: Start module (59189)
Nov 24 21:40:37 compute-0 ansible-async_wrapper.py[59185]: Return async_wrapper task started.
Nov 24 21:40:37 compute-0 sudo[59183]: pam_unix(sudo:session): session closed for user root
Nov 24 21:40:37 compute-0 python3.9[59190]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Nov 24 21:40:38 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Nov 24 21:40:38 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Nov 24 21:40:38 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Nov 24 21:40:38 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Nov 24 21:40:38 compute-0 kernel: cfg80211: failed to load regulatory.db
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.7802] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59191 uid=0 result="success"
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.7822] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59191 uid=0 result="success"
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8470] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8472] audit: op="connection-add" uuid="54500ef9-10b2-457f-a47a-2f68b209c35c" name="br-ex-br" pid=59191 uid=0 result="success"
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8495] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8496] audit: op="connection-add" uuid="4d6109e1-84be-4c33-b6b7-43759843fead" name="br-ex-port" pid=59191 uid=0 result="success"
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8515] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8517] audit: op="connection-add" uuid="e67fa613-b23e-4982-b594-78f9727c2c3e" name="eth1-port" pid=59191 uid=0 result="success"
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8536] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8539] audit: op="connection-add" uuid="581684d0-4af9-4657-a65c-97c379ca0b95" name="vlan20-port" pid=59191 uid=0 result="success"
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8557] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8559] audit: op="connection-add" uuid="dfe2943f-5109-439d-b261-de0297958e9c" name="vlan21-port" pid=59191 uid=0 result="success"
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8577] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8579] audit: op="connection-add" uuid="b25346ab-af5e-4786-84bb-50a967225432" name="vlan22-port" pid=59191 uid=0 result="success"
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8610] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.autoconnect-priority,connection.timestamp,ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method,802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=59191 uid=0 result="success"
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8639] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/10)
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8641] audit: op="connection-add" uuid="11fa4a5d-1e5e-4315-b9f2-ccffe063ad4f" name="br-ex-if" pid=59191 uid=0 result="success"
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8720] audit: op="connection-update" uuid="991fe21a-95d5-515f-bd84-4bf4dd24e652" name="ci-private-network" args="connection.slave-type,connection.master,connection.controller,connection.timestamp,connection.port-type,ovs-interface.type,ipv6.dns,ipv6.addr-gen-mode,ipv6.addresses,ipv6.routes,ipv6.method,ipv6.routing-rules,ipv4.dns,ipv4.addresses,ipv4.method,ipv4.routing-rules,ipv4.routes,ipv4.never-default,ovs-external-ids.data" pid=59191 uid=0 result="success"
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8747] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8749] audit: op="connection-add" uuid="e998d2cd-622c-48a5-ae78-d9c2d1487b46" name="vlan20-if" pid=59191 uid=0 result="success"
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8776] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8778] audit: op="connection-add" uuid="ca37e9fe-12ce-4a35-a6eb-a040b7040aef" name="vlan21-if" pid=59191 uid=0 result="success"
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8806] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8808] audit: op="connection-add" uuid="da2253a1-ee23-4f8d-abb0-c0b66fdb5c4a" name="vlan22-if" pid=59191 uid=0 result="success"
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8828] audit: op="connection-delete" uuid="e2d4b87a-173d-3177-ab54-3ebbe6b2891a" name="Wired connection 1" pid=59191 uid=0 result="success"
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8848] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8862] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8868] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (54500ef9-10b2-457f-a47a-2f68b209c35c)
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8868] audit: op="connection-activate" uuid="54500ef9-10b2-457f-a47a-2f68b209c35c" name="br-ex-br" pid=59191 uid=0 result="success"
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8871] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8881] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8886] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (4d6109e1-84be-4c33-b6b7-43759843fead)
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8889] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8899] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8904] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (e67fa613-b23e-4982-b594-78f9727c2c3e)
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8907] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8916] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8922] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (581684d0-4af9-4657-a65c-97c379ca0b95)
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8925] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8935] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8940] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (dfe2943f-5109-439d-b261-de0297958e9c)
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8942] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8953] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8959] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (b25346ab-af5e-4786-84bb-50a967225432)
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8960] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8963] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8966] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8975] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8981] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8987] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (11fa4a5d-1e5e-4315-b9f2-ccffe063ad4f)
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8988] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8992] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8995] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8997] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.8998] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9014] device (eth1): disconnecting for new activation request.
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9015] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9019] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9021] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9024] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9028] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9036] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9041] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (e998d2cd-622c-48a5-ae78-d9c2d1487b46)
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9041] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9045] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9047] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9049] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9051] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9057] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9062] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (ca37e9fe-12ce-4a35-a6eb-a040b7040aef)
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9062] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9066] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9068] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9069] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9072] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9078] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9083] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (da2253a1-ee23-4f8d-abb0-c0b66fdb5c4a)
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9084] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9088] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9091] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9093] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9095] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9110] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,ipv6.addr-gen-mode,ipv6.method,802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=59191 uid=0 result="success"
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9112] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9117] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9119] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9128] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9134] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9140] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9144] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9146] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9151] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 kernel: ovs-system: entered promiscuous mode
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9157] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9162] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9164] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9172] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9178] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 systemd-udevd[59196]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 21:40:39 compute-0 kernel: Timeout policy base is empty
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9183] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9185] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9192] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9197] dhcp4 (eth0): canceled DHCP transaction
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9197] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9197] dhcp4 (eth0): state changed no lease
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9199] dhcp4 (eth0): activation: beginning transaction (no timeout)
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9210] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9214] audit: op="device-reapply" interface="eth1" ifindex=3 pid=59191 uid=0 result="fail" reason="Device is not activated"
Nov 24 21:40:39 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9253] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9257] dhcp4 (eth0): state changed new lease, address=38.102.83.66
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9265] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9309] device (eth1): disconnecting for new activation request.
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9310] audit: op="connection-activate" uuid="991fe21a-95d5-515f-bd84-4bf4dd24e652" name="ci-private-network" pid=59191 uid=0 result="success"
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9312] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9361] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59191 uid=0 result="success"
Nov 24 21:40:39 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9435] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 24 21:40:39 compute-0 kernel: br-ex: entered promiscuous mode
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9604] device (eth1): Activation: starting connection 'ci-private-network' (991fe21a-95d5-515f-bd84-4bf4dd24e652)
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9619] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9623] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9638] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9641] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9643] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9645] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9647] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9649] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9669] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9677] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9683] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9688] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9694] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9699] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9705] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9710] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9717] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9722] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9728] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Nov 24 21:40:39 compute-0 kernel: vlan22: entered promiscuous mode
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9734] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9740] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9750] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Nov 24 21:40:39 compute-0 systemd-udevd[59195]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9759] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9767] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9786] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9840] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9842] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9845] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9851] device (eth1): Activation: successful, device activated.
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9856] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9862] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9875] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Nov 24 21:40:39 compute-0 kernel: vlan21: entered promiscuous mode
Nov 24 21:40:39 compute-0 systemd-udevd[59198]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9893] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9934] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9935] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 21:40:39 compute-0 NetworkManager[56413]: <info>  [1764020439.9940] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 24 21:40:40 compute-0 kernel: vlan20: entered promiscuous mode
Nov 24 21:40:40 compute-0 NetworkManager[56413]: <info>  [1764020440.0006] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Nov 24 21:40:40 compute-0 NetworkManager[56413]: <info>  [1764020440.0015] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 21:40:40 compute-0 NetworkManager[56413]: <info>  [1764020440.0055] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 21:40:40 compute-0 NetworkManager[56413]: <info>  [1764020440.0056] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 21:40:40 compute-0 NetworkManager[56413]: <info>  [1764020440.0061] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 24 21:40:40 compute-0 NetworkManager[56413]: <info>  [1764020440.0095] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Nov 24 21:40:40 compute-0 NetworkManager[56413]: <info>  [1764020440.0103] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 21:40:40 compute-0 NetworkManager[56413]: <info>  [1764020440.0145] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 21:40:40 compute-0 NetworkManager[56413]: <info>  [1764020440.0146] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 21:40:40 compute-0 NetworkManager[56413]: <info>  [1764020440.0151] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 24 21:40:40 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Nov 24 21:40:41 compute-0 NetworkManager[56413]: <info>  [1764020441.1528] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59191 uid=0 result="success"
Nov 24 21:40:41 compute-0 sudo[59521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hawvwbpckyquwhcgcegwqyuzrugvkaxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020440.6452286-295-166543531229493/AnsiballZ_async_status.py'
Nov 24 21:40:41 compute-0 sudo[59521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:40:41 compute-0 NetworkManager[56413]: <info>  [1764020441.3537] checkpoint[0x55d9d010a950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Nov 24 21:40:41 compute-0 NetworkManager[56413]: <info>  [1764020441.3540] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59191 uid=0 result="success"
Nov 24 21:40:41 compute-0 python3.9[59523]: ansible-ansible.legacy.async_status Invoked with jid=j6041788904.59185 mode=status _async_dir=/root/.ansible_async
Nov 24 21:40:41 compute-0 sudo[59521]: pam_unix(sudo:session): session closed for user root
Nov 24 21:40:41 compute-0 NetworkManager[56413]: <info>  [1764020441.6986] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59191 uid=0 result="success"
Nov 24 21:40:41 compute-0 NetworkManager[56413]: <info>  [1764020441.7003] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59191 uid=0 result="success"
Nov 24 21:40:41 compute-0 NetworkManager[56413]: <info>  [1764020441.8977] audit: op="networking-control" arg="global-dns-configuration" pid=59191 uid=0 result="success"
Nov 24 21:40:41 compute-0 NetworkManager[56413]: <info>  [1764020441.9020] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Nov 24 21:40:41 compute-0 NetworkManager[56413]: <info>  [1764020441.9058] audit: op="networking-control" arg="global-dns-configuration" pid=59191 uid=0 result="success"
Nov 24 21:40:41 compute-0 NetworkManager[56413]: <info>  [1764020441.9095] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59191 uid=0 result="success"
Nov 24 21:40:42 compute-0 NetworkManager[56413]: <info>  [1764020442.0733] checkpoint[0x55d9d010aa20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Nov 24 21:40:42 compute-0 NetworkManager[56413]: <info>  [1764020442.0741] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59191 uid=0 result="success"
Nov 24 21:40:42 compute-0 ansible-async_wrapper.py[59189]: Module complete (59189)
Nov 24 21:40:42 compute-0 ansible-async_wrapper.py[59188]: Done in kid B.
Nov 24 21:40:44 compute-0 sudo[59628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smruartgmhfhfzingvqvtotydenbgnzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020440.6452286-295-166543531229493/AnsiballZ_async_status.py'
Nov 24 21:40:44 compute-0 sudo[59628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:40:45 compute-0 python3.9[59630]: ansible-ansible.legacy.async_status Invoked with jid=j6041788904.59185 mode=status _async_dir=/root/.ansible_async
Nov 24 21:40:45 compute-0 sudo[59628]: pam_unix(sudo:session): session closed for user root
Nov 24 21:40:45 compute-0 sudo[59727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mohggtcovtmdopbgjctaxofdwuuzrtkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020440.6452286-295-166543531229493/AnsiballZ_async_status.py'
Nov 24 21:40:45 compute-0 sudo[59727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:40:45 compute-0 python3.9[59729]: ansible-ansible.legacy.async_status Invoked with jid=j6041788904.59185 mode=cleanup _async_dir=/root/.ansible_async
Nov 24 21:40:45 compute-0 sudo[59727]: pam_unix(sudo:session): session closed for user root
Nov 24 21:40:46 compute-0 sudo[59879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uphqdapgslmkzedxqlfznwkgvfyizudm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020445.9802709-322-119267614325797/AnsiballZ_stat.py'
Nov 24 21:40:46 compute-0 sudo[59879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:40:46 compute-0 python3.9[59881]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:40:46 compute-0 sudo[59879]: pam_unix(sudo:session): session closed for user root
Nov 24 21:40:47 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 24 21:40:47 compute-0 sudo[60004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsouelcojrlgvmvynwldmhaithvitwdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020445.9802709-322-119267614325797/AnsiballZ_copy.py'
Nov 24 21:40:47 compute-0 sudo[60004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:40:47 compute-0 python3.9[60006]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764020445.9802709-322-119267614325797/.source.returncode _original_basename=.9ahzgue0 follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:40:47 compute-0 sudo[60004]: pam_unix(sudo:session): session closed for user root
Nov 24 21:40:48 compute-0 sudo[60157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imvhpjjygedcmzfozruxykopjasgekvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020447.6729915-338-71150327114662/AnsiballZ_stat.py'
Nov 24 21:40:48 compute-0 sudo[60157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:40:48 compute-0 python3.9[60159]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:40:48 compute-0 sudo[60157]: pam_unix(sudo:session): session closed for user root
Nov 24 21:40:48 compute-0 sudo[60280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcgugdfsunydwjtetuziysntxekkfbmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020447.6729915-338-71150327114662/AnsiballZ_copy.py'
Nov 24 21:40:48 compute-0 sudo[60280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:40:48 compute-0 python3.9[60282]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764020447.6729915-338-71150327114662/.source.cfg _original_basename=.tb8j_oaj follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:40:48 compute-0 sudo[60280]: pam_unix(sudo:session): session closed for user root
Nov 24 21:40:49 compute-0 sudo[60432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqxluqwjalbmyfqcmwlvfxgpzecmtpkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020449.179483-353-164574870977030/AnsiballZ_systemd.py'
Nov 24 21:40:49 compute-0 sudo[60432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:40:49 compute-0 python3.9[60434]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 21:40:49 compute-0 systemd[1]: Reloading Network Manager...
Nov 24 21:40:50 compute-0 NetworkManager[56413]: <info>  [1764020450.0333] audit: op="reload" arg="0" pid=60438 uid=0 result="success"
Nov 24 21:40:50 compute-0 NetworkManager[56413]: <info>  [1764020450.0340] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Nov 24 21:40:50 compute-0 systemd[1]: Reloaded Network Manager.
Nov 24 21:40:50 compute-0 sudo[60432]: pam_unix(sudo:session): session closed for user root
Nov 24 21:40:50 compute-0 sshd-session[52418]: Connection closed by 192.168.122.30 port 35834
Nov 24 21:40:50 compute-0 sshd-session[52415]: pam_unix(sshd:session): session closed for user zuul
Nov 24 21:40:50 compute-0 systemd-logind[806]: Session 12 logged out. Waiting for processes to exit.
Nov 24 21:40:50 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Nov 24 21:40:50 compute-0 systemd[1]: session-12.scope: Consumed 54.222s CPU time.
Nov 24 21:40:50 compute-0 systemd-logind[806]: Removed session 12.
Nov 24 21:40:56 compute-0 sshd-session[60469]: Accepted publickey for zuul from 192.168.122.30 port 38498 ssh2: ECDSA SHA256:EZFVJPqmz26FBCQurIACD0v24tZqtt5C19oQoS60ZHY
Nov 24 21:40:56 compute-0 systemd-logind[806]: New session 13 of user zuul.
Nov 24 21:40:56 compute-0 systemd[1]: Started Session 13 of User zuul.
Nov 24 21:40:56 compute-0 sshd-session[60469]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 21:40:57 compute-0 python3.9[60622]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 21:40:58 compute-0 python3.9[60777]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 21:40:59 compute-0 python3.9[60966]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:41:00 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 24 21:41:00 compute-0 sshd-session[60472]: Connection closed by 192.168.122.30 port 38498
Nov 24 21:41:00 compute-0 sshd-session[60469]: pam_unix(sshd:session): session closed for user zuul
Nov 24 21:41:00 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Nov 24 21:41:00 compute-0 systemd[1]: session-13.scope: Consumed 2.558s CPU time.
Nov 24 21:41:00 compute-0 systemd-logind[806]: Session 13 logged out. Waiting for processes to exit.
Nov 24 21:41:00 compute-0 systemd-logind[806]: Removed session 13.
Nov 24 21:41:05 compute-0 sshd-session[60995]: Accepted publickey for zuul from 192.168.122.30 port 50100 ssh2: ECDSA SHA256:EZFVJPqmz26FBCQurIACD0v24tZqtt5C19oQoS60ZHY
Nov 24 21:41:06 compute-0 systemd-logind[806]: New session 14 of user zuul.
Nov 24 21:41:06 compute-0 systemd[1]: Started Session 14 of User zuul.
Nov 24 21:41:06 compute-0 sshd-session[60995]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 21:41:07 compute-0 python3.9[61148]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 21:41:08 compute-0 python3.9[61303]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 21:41:09 compute-0 sudo[61457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynsabtvqzwpzvprthmkvgmyduintwobt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020468.7145188-40-42680079073470/AnsiballZ_setup.py'
Nov 24 21:41:09 compute-0 sudo[61457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:41:09 compute-0 python3.9[61459]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 21:41:09 compute-0 sudo[61457]: pam_unix(sudo:session): session closed for user root
Nov 24 21:41:10 compute-0 sudo[61541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wglxjhemgbzojhcketyqfbolvmsxogdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020468.7145188-40-42680079073470/AnsiballZ_dnf.py'
Nov 24 21:41:10 compute-0 sudo[61541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:41:10 compute-0 python3.9[61543]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 21:41:11 compute-0 sudo[61541]: pam_unix(sudo:session): session closed for user root
Nov 24 21:41:12 compute-0 sudo[61695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwwksrkhopwomcnzwxlvwuojgmntquqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020471.7933297-52-201729841004592/AnsiballZ_setup.py'
Nov 24 21:41:12 compute-0 sudo[61695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:41:12 compute-0 python3.9[61697]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 21:41:12 compute-0 sudo[61695]: pam_unix(sudo:session): session closed for user root
Nov 24 21:41:13 compute-0 sudo[61886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyegofwgmtwmwhcqkzjlaygpeyumnamb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020473.156988-63-261819212486282/AnsiballZ_file.py'
Nov 24 21:41:13 compute-0 sudo[61886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:41:13 compute-0 python3.9[61888]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:41:13 compute-0 sudo[61886]: pam_unix(sudo:session): session closed for user root
Nov 24 21:41:14 compute-0 sudo[62038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obfgtayssxujmuijbbtyncqtkmtdhrrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020474.0499654-71-205699662539945/AnsiballZ_command.py'
Nov 24 21:41:14 compute-0 sudo[62038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:41:14 compute-0 python3.9[62040]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:41:14 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 21:41:14 compute-0 sudo[62038]: pam_unix(sudo:session): session closed for user root
Nov 24 21:41:15 compute-0 sudo[62202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecdxihrgohfwnegogmgsoxmlizymebes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020475.181418-79-8602373892390/AnsiballZ_stat.py'
Nov 24 21:41:15 compute-0 sudo[62202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:41:15 compute-0 python3.9[62204]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:41:15 compute-0 sudo[62202]: pam_unix(sudo:session): session closed for user root
Nov 24 21:41:16 compute-0 sudo[62280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvwoleweljumbpdssftpyjultpztbmno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020475.181418-79-8602373892390/AnsiballZ_file.py'
Nov 24 21:41:16 compute-0 sudo[62280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:41:16 compute-0 python3.9[62282]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:41:16 compute-0 sudo[62280]: pam_unix(sudo:session): session closed for user root
Nov 24 21:41:16 compute-0 sudo[62432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qylfrdffpsgwwohvgmxvqwubgkidfxfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020476.5274105-91-263301206387927/AnsiballZ_stat.py'
Nov 24 21:41:16 compute-0 sudo[62432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:41:17 compute-0 python3.9[62434]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:41:17 compute-0 sudo[62432]: pam_unix(sudo:session): session closed for user root
Nov 24 21:41:17 compute-0 sudo[62510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvsqldzzzjgldmanaolmcxjwnocubrnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020476.5274105-91-263301206387927/AnsiballZ_file.py'
Nov 24 21:41:17 compute-0 sudo[62510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:41:17 compute-0 python3.9[62512]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:41:17 compute-0 sudo[62510]: pam_unix(sudo:session): session closed for user root
Nov 24 21:41:18 compute-0 sudo[62662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxpuhyioiufcryeepzjofcrarroawnsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020477.937765-104-211137109265211/AnsiballZ_ini_file.py'
Nov 24 21:41:18 compute-0 sudo[62662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:41:18 compute-0 python3.9[62664]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:41:18 compute-0 sudo[62662]: pam_unix(sudo:session): session closed for user root
Nov 24 21:41:19 compute-0 sudo[62814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsrctornixwuneckrkhwwgfocnlrmhel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020478.878932-104-170070320974857/AnsiballZ_ini_file.py'
Nov 24 21:41:19 compute-0 sudo[62814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:41:19 compute-0 python3.9[62816]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:41:19 compute-0 sudo[62814]: pam_unix(sudo:session): session closed for user root
Nov 24 21:41:20 compute-0 sudo[62966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydvrazgbdikxnjbvxmqmwgkyppadteer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020479.795348-104-91854694080826/AnsiballZ_ini_file.py'
Nov 24 21:41:20 compute-0 sudo[62966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:41:20 compute-0 python3.9[62968]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:41:20 compute-0 sudo[62966]: pam_unix(sudo:session): session closed for user root
Nov 24 21:41:20 compute-0 sudo[63118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xujgsufprrnaukuozazeluibuttckvae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020480.5950313-104-110590713435608/AnsiballZ_ini_file.py'
Nov 24 21:41:20 compute-0 sudo[63118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:41:21 compute-0 python3.9[63120]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:41:21 compute-0 sudo[63118]: pam_unix(sudo:session): session closed for user root
Nov 24 21:41:21 compute-0 sudo[63270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuxlcirmdqhfewxrqmmawrnduvpzywwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020481.4593096-135-69603068050346/AnsiballZ_dnf.py'
Nov 24 21:41:21 compute-0 sudo[63270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:41:22 compute-0 python3.9[63272]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 21:41:23 compute-0 sudo[63270]: pam_unix(sudo:session): session closed for user root
Nov 24 21:41:24 compute-0 sudo[63423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndvoztgjgrfdrptqbicppzifcmsazcjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020483.7523541-146-151143071184164/AnsiballZ_setup.py'
Nov 24 21:41:24 compute-0 sudo[63423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:41:24 compute-0 python3.9[63425]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 21:41:24 compute-0 sudo[63423]: pam_unix(sudo:session): session closed for user root
Nov 24 21:41:24 compute-0 sudo[63577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iancvyyvtpehxadynpywjxbepqydeckr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020484.5725358-154-257284181278096/AnsiballZ_stat.py'
Nov 24 21:41:24 compute-0 sudo[63577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:41:25 compute-0 python3.9[63579]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:41:25 compute-0 sudo[63577]: pam_unix(sudo:session): session closed for user root
Nov 24 21:41:25 compute-0 sudo[63729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdncdppjavtjmjvfzchqtvwnlnfsprxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020485.3605416-163-13788309564951/AnsiballZ_stat.py'
Nov 24 21:41:25 compute-0 sudo[63729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:41:25 compute-0 python3.9[63731]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:41:25 compute-0 sudo[63729]: pam_unix(sudo:session): session closed for user root
Nov 24 21:41:26 compute-0 sudo[63881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qouyjqywtbdnueeoundwuicopkqgdvyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020486.2227998-173-24530515474541/AnsiballZ_command.py'
Nov 24 21:41:26 compute-0 sudo[63881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:41:26 compute-0 python3.9[63883]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:41:26 compute-0 sudo[63881]: pam_unix(sudo:session): session closed for user root
Nov 24 21:41:27 compute-0 sudo[64034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wllofeftfpcpznndsikxwxftxmmgwazv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020487.1694334-183-11450901677991/AnsiballZ_service_facts.py'
Nov 24 21:41:27 compute-0 sudo[64034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:41:27 compute-0 python3.9[64036]: ansible-service_facts Invoked
Nov 24 21:41:27 compute-0 network[64053]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 21:41:27 compute-0 network[64054]: 'network-scripts' will be removed from distribution in near future.
Nov 24 21:41:27 compute-0 network[64055]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 21:41:32 compute-0 sudo[64034]: pam_unix(sudo:session): session closed for user root
Nov 24 21:41:33 compute-0 sudo[64338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuetfyfdbbzvqdpjdqdfmhahditlmayy ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1764020493.3303337-198-32954802647822/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1764020493.3303337-198-32954802647822/args'
Nov 24 21:41:33 compute-0 sudo[64338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:41:33 compute-0 sudo[64338]: pam_unix(sudo:session): session closed for user root
Nov 24 21:41:34 compute-0 sshd-session[64368]: Connection closed by 193.32.162.145 port 51324
Nov 24 21:41:34 compute-0 sudo[64506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cecsgjldpzmtvhilpjdqtzsisoyqoiwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020494.2312112-209-22283348871348/AnsiballZ_dnf.py'
Nov 24 21:41:34 compute-0 sudo[64506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:41:34 compute-0 python3.9[64508]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 21:41:36 compute-0 sudo[64506]: pam_unix(sudo:session): session closed for user root
Nov 24 21:41:37 compute-0 sudo[64659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjxvddeqwaqavijqtuipwwvubplqrmdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020496.474976-222-36887494969752/AnsiballZ_package_facts.py'
Nov 24 21:41:37 compute-0 sudo[64659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:41:37 compute-0 python3.9[64661]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 24 21:41:37 compute-0 sudo[64659]: pam_unix(sudo:session): session closed for user root
Nov 24 21:41:38 compute-0 sudo[64811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzheexqvezedrlktygclofcwjvfbwedu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020498.2738702-232-126688098583716/AnsiballZ_stat.py'
Nov 24 21:41:38 compute-0 sudo[64811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:41:38 compute-0 python3.9[64813]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:41:38 compute-0 sudo[64811]: pam_unix(sudo:session): session closed for user root
Nov 24 21:41:39 compute-0 sudo[64936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipcjzjdjvbsdlxzybeprstaokhsxfyih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020498.2738702-232-126688098583716/AnsiballZ_copy.py'
Nov 24 21:41:39 compute-0 sudo[64936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:41:39 compute-0 python3.9[64938]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764020498.2738702-232-126688098583716/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:41:39 compute-0 sudo[64936]: pam_unix(sudo:session): session closed for user root
Nov 24 21:41:40 compute-0 sudo[65090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgwzhyhghjkbprawfpuetrnjwavxnxde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020499.9987478-247-102632594422812/AnsiballZ_stat.py'
Nov 24 21:41:40 compute-0 sudo[65090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:41:40 compute-0 python3.9[65092]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:41:40 compute-0 sudo[65090]: pam_unix(sudo:session): session closed for user root
Nov 24 21:41:41 compute-0 sudo[65215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rthtqpqykfokzfvtahzvowggohwcqiir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020499.9987478-247-102632594422812/AnsiballZ_copy.py'
Nov 24 21:41:41 compute-0 sudo[65215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:41:41 compute-0 python3.9[65217]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764020499.9987478-247-102632594422812/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:41:41 compute-0 sudo[65215]: pam_unix(sudo:session): session closed for user root
Nov 24 21:41:42 compute-0 sudo[65369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esqgogowddfvowrnjcwljiwfwbkwhfnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020501.9153564-268-137210834585831/AnsiballZ_lineinfile.py'
Nov 24 21:41:42 compute-0 sudo[65369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:41:42 compute-0 python3.9[65371]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:41:42 compute-0 sudo[65369]: pam_unix(sudo:session): session closed for user root
Nov 24 21:41:43 compute-0 sudo[65523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lspsnfohmhmadmqmgjvbtwcmkpqmluqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020503.2959573-283-131070539150552/AnsiballZ_setup.py'
Nov 24 21:41:43 compute-0 sudo[65523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:41:43 compute-0 python3.9[65525]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 21:41:44 compute-0 sudo[65523]: pam_unix(sudo:session): session closed for user root
Nov 24 21:41:44 compute-0 sudo[65607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnvxcgyfjhquptxeozhcyxpkcwokmlyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020503.2959573-283-131070539150552/AnsiballZ_systemd.py'
Nov 24 21:41:44 compute-0 sudo[65607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:41:45 compute-0 python3.9[65609]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 21:41:45 compute-0 sudo[65607]: pam_unix(sudo:session): session closed for user root
Nov 24 21:41:46 compute-0 sudo[65761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxxregsajbwtpfgcoqcgfpghkbgpijel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020505.70857-299-256410508743876/AnsiballZ_setup.py'
Nov 24 21:41:46 compute-0 sudo[65761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:41:46 compute-0 python3.9[65763]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 21:41:46 compute-0 sudo[65761]: pam_unix(sudo:session): session closed for user root
Nov 24 21:41:46 compute-0 sudo[65845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wseexbelcdvcrltxslmilsrvvtzkkytc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020505.70857-299-256410508743876/AnsiballZ_systemd.py'
Nov 24 21:41:46 compute-0 sudo[65845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:41:47 compute-0 python3.9[65847]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 21:41:47 compute-0 chronyd[805]: chronyd exiting
Nov 24 21:41:47 compute-0 systemd[1]: Stopping NTP client/server...
Nov 24 21:41:47 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Nov 24 21:41:47 compute-0 systemd[1]: Stopped NTP client/server.
Nov 24 21:41:47 compute-0 systemd[1]: Starting NTP client/server...
Nov 24 21:41:47 compute-0 chronyd[65855]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 24 21:41:47 compute-0 chronyd[65855]: Frequency -31.492 +/- 0.515 ppm read from /var/lib/chrony/drift
Nov 24 21:41:47 compute-0 chronyd[65855]: Loaded seccomp filter (level 2)
Nov 24 21:41:47 compute-0 systemd[1]: Started NTP client/server.
Nov 24 21:41:47 compute-0 sudo[65845]: pam_unix(sudo:session): session closed for user root
Nov 24 21:41:47 compute-0 sshd-session[60998]: Connection closed by 192.168.122.30 port 50100
Nov 24 21:41:47 compute-0 sshd-session[60995]: pam_unix(sshd:session): session closed for user zuul
Nov 24 21:41:47 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Nov 24 21:41:47 compute-0 systemd[1]: session-14.scope: Consumed 29.378s CPU time.
Nov 24 21:41:47 compute-0 systemd-logind[806]: Session 14 logged out. Waiting for processes to exit.
Nov 24 21:41:47 compute-0 systemd-logind[806]: Removed session 14.
Nov 24 21:41:53 compute-0 sshd-session[65881]: Accepted publickey for zuul from 192.168.122.30 port 44560 ssh2: ECDSA SHA256:EZFVJPqmz26FBCQurIACD0v24tZqtt5C19oQoS60ZHY
Nov 24 21:41:53 compute-0 systemd-logind[806]: New session 15 of user zuul.
Nov 24 21:41:53 compute-0 systemd[1]: Started Session 15 of User zuul.
Nov 24 21:41:53 compute-0 sshd-session[65881]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 21:41:55 compute-0 sshd-session[66016]: Invalid user ubuntu from 45.148.10.240 port 42880
Nov 24 21:41:55 compute-0 python3.9[66036]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 21:41:55 compute-0 sshd-session[66016]: Connection closed by invalid user ubuntu 45.148.10.240 port 42880 [preauth]
Nov 24 21:41:56 compute-0 sudo[66190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gteifmvefmbktomfofrvgmoxbfyrcidi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020515.552508-33-88248976989971/AnsiballZ_file.py'
Nov 24 21:41:56 compute-0 sudo[66190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:41:56 compute-0 python3.9[66192]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:41:56 compute-0 sudo[66190]: pam_unix(sudo:session): session closed for user root
Nov 24 21:41:57 compute-0 sudo[66365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmgiavtyuufabuqrhkdcjslawjplqmlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020516.5081465-41-116373572387799/AnsiballZ_stat.py'
Nov 24 21:41:57 compute-0 sudo[66365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:41:57 compute-0 python3.9[66367]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:41:57 compute-0 sudo[66365]: pam_unix(sudo:session): session closed for user root
Nov 24 21:41:57 compute-0 sudo[66443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsqsmjgvjeqekeevwdsdvbkzcyuccpmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020516.5081465-41-116373572387799/AnsiballZ_file.py'
Nov 24 21:41:57 compute-0 sudo[66443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:41:57 compute-0 python3.9[66445]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.5sc6d68i recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:41:57 compute-0 sudo[66443]: pam_unix(sudo:session): session closed for user root
Nov 24 21:41:58 compute-0 sudo[66595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ondgcfqriaiqcvpivgfoynodnmenavac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020518.5136957-61-183461503420134/AnsiballZ_stat.py'
Nov 24 21:41:58 compute-0 sudo[66595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:41:59 compute-0 python3.9[66597]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:41:59 compute-0 sudo[66595]: pam_unix(sudo:session): session closed for user root
Nov 24 21:41:59 compute-0 sudo[66718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbphccfxorvsthlsslmxycluitonlggn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020518.5136957-61-183461503420134/AnsiballZ_copy.py'
Nov 24 21:41:59 compute-0 sudo[66718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:41:59 compute-0 python3.9[66720]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764020518.5136957-61-183461503420134/.source _original_basename=.lqdkfr7j follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:41:59 compute-0 sudo[66718]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:00 compute-0 sudo[66870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-giclnwnmgfvartqcitxrjambrqqyswvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020520.1126208-77-120275565112231/AnsiballZ_file.py'
Nov 24 21:42:00 compute-0 sudo[66870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:00 compute-0 python3.9[66872]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:42:00 compute-0 sudo[66870]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:01 compute-0 sudo[67022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrxsnbqsvndojyjvbqnwthnosnfptxfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020520.9176817-85-182183527465551/AnsiballZ_stat.py'
Nov 24 21:42:01 compute-0 sudo[67022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:01 compute-0 python3.9[67024]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:42:01 compute-0 sudo[67022]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:02 compute-0 sudo[67145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxaolrgzpmjzjsxzfaxykekageevoehk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020520.9176817-85-182183527465551/AnsiballZ_copy.py'
Nov 24 21:42:02 compute-0 sudo[67145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:02 compute-0 python3.9[67147]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764020520.9176817-85-182183527465551/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:42:02 compute-0 sudo[67145]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:02 compute-0 sudo[67297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hethrqmpvlvgltnsvcmynmlliaulbqin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020522.4356713-85-43518604847241/AnsiballZ_stat.py'
Nov 24 21:42:02 compute-0 sudo[67297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:03 compute-0 python3.9[67299]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:42:03 compute-0 sudo[67297]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:03 compute-0 sudo[67420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xacktncssnflbhddjjwmuhyprebnnapz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020522.4356713-85-43518604847241/AnsiballZ_copy.py'
Nov 24 21:42:03 compute-0 sudo[67420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:03 compute-0 python3.9[67422]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764020522.4356713-85-43518604847241/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:42:03 compute-0 sudo[67420]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:04 compute-0 sudo[67572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryykcuetqjszmxtiwfjjvuavpihngcyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020523.8270316-114-92602040564997/AnsiballZ_file.py'
Nov 24 21:42:04 compute-0 sudo[67572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:04 compute-0 python3.9[67574]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:42:04 compute-0 sudo[67572]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:04 compute-0 sudo[67724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfdffetbmtazudoyqslyifvmmlugzqqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020524.550966-122-205932604172900/AnsiballZ_stat.py'
Nov 24 21:42:04 compute-0 sudo[67724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:05 compute-0 python3.9[67726]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:42:05 compute-0 sudo[67724]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:05 compute-0 sudo[67847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfgbspxrfomnizovnbaomejvwcwqwtgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020524.550966-122-205932604172900/AnsiballZ_copy.py'
Nov 24 21:42:05 compute-0 sudo[67847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:05 compute-0 python3.9[67849]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764020524.550966-122-205932604172900/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:42:05 compute-0 sudo[67847]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:06 compute-0 sudo[67999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vblqfiojysljeafzosgperpmunppltby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020525.9751625-137-266502028034844/AnsiballZ_stat.py'
Nov 24 21:42:06 compute-0 sudo[67999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:06 compute-0 python3.9[68001]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:42:06 compute-0 sudo[67999]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:07 compute-0 sudo[68122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddowlgbizsfrufeygnweejccikhvhowb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020525.9751625-137-266502028034844/AnsiballZ_copy.py'
Nov 24 21:42:07 compute-0 sudo[68122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:07 compute-0 python3.9[68124]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764020525.9751625-137-266502028034844/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:42:07 compute-0 sudo[68122]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:08 compute-0 sudo[68274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbjfagpjawyfprbkljcurxdpujgbffrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020527.6290371-152-139118201829554/AnsiballZ_systemd.py'
Nov 24 21:42:08 compute-0 sudo[68274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:08 compute-0 python3.9[68276]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 21:42:08 compute-0 systemd[1]: Reloading.
Nov 24 21:42:08 compute-0 systemd-rc-local-generator[68306]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:42:08 compute-0 systemd-sysv-generator[68310]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:42:08 compute-0 systemd[1]: Reloading.
Nov 24 21:42:08 compute-0 systemd-rc-local-generator[68341]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:42:08 compute-0 systemd-sysv-generator[68345]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:42:09 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Nov 24 21:42:09 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Nov 24 21:42:09 compute-0 sudo[68274]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:09 compute-0 sudo[68502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kstdokqhxwzifeszahdliatrvdwanvun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020529.3495955-160-275492456354125/AnsiballZ_stat.py'
Nov 24 21:42:09 compute-0 sudo[68502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:09 compute-0 python3.9[68504]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:42:09 compute-0 sudo[68502]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:10 compute-0 sudo[68625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odishavwgvudkkfdsorwgwonvnkffymv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020529.3495955-160-275492456354125/AnsiballZ_copy.py'
Nov 24 21:42:10 compute-0 sudo[68625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:10 compute-0 python3.9[68627]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764020529.3495955-160-275492456354125/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:42:10 compute-0 sudo[68625]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:11 compute-0 sudo[68777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkdtukdrplitgyzpwjlxwglwghbnxgdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020530.817843-175-79258222943687/AnsiballZ_stat.py'
Nov 24 21:42:11 compute-0 sudo[68777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:11 compute-0 python3.9[68779]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:42:11 compute-0 sudo[68777]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:11 compute-0 sudo[68900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttjgynnufeckuyauvudjjemhsnawjmmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020530.817843-175-79258222943687/AnsiballZ_copy.py'
Nov 24 21:42:11 compute-0 sudo[68900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:11 compute-0 python3.9[68902]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764020530.817843-175-79258222943687/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:42:11 compute-0 sudo[68900]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:12 compute-0 sudo[69052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnoecvaqrlrcryutcwhmdoecouwxlamh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020532.1381743-190-247686940160205/AnsiballZ_systemd.py'
Nov 24 21:42:12 compute-0 sudo[69052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:12 compute-0 python3.9[69054]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 21:42:12 compute-0 systemd[1]: Reloading.
Nov 24 21:42:12 compute-0 systemd-rc-local-generator[69078]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:42:12 compute-0 systemd-sysv-generator[69084]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:42:13 compute-0 systemd[1]: Reloading.
Nov 24 21:42:13 compute-0 systemd-sysv-generator[69121]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:42:13 compute-0 systemd-rc-local-generator[69117]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:42:13 compute-0 systemd[1]: Starting Create netns directory...
Nov 24 21:42:13 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 24 21:42:13 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 24 21:42:13 compute-0 systemd[1]: Finished Create netns directory.
Nov 24 21:42:13 compute-0 sudo[69052]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:14 compute-0 python3.9[69278]: ansible-ansible.builtin.service_facts Invoked
Nov 24 21:42:14 compute-0 network[69295]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 21:42:14 compute-0 network[69296]: 'network-scripts' will be removed from distribution in near future.
Nov 24 21:42:14 compute-0 network[69297]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 21:42:18 compute-0 sudo[69557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvhifzqzfdwnxjykavitnaxinhiqdxns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020538.1234276-206-90381193243169/AnsiballZ_systemd.py'
Nov 24 21:42:18 compute-0 sudo[69557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:18 compute-0 python3.9[69559]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 21:42:18 compute-0 systemd[1]: Reloading.
Nov 24 21:42:19 compute-0 systemd-rc-local-generator[69590]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:42:19 compute-0 systemd-sysv-generator[69593]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:42:19 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Nov 24 21:42:19 compute-0 iptables.init[69600]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Nov 24 21:42:19 compute-0 iptables.init[69600]: iptables: Flushing firewall rules: [  OK  ]
Nov 24 21:42:19 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Nov 24 21:42:19 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Nov 24 21:42:19 compute-0 sudo[69557]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:20 compute-0 sudo[69794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzgxsjufixzsqcmwnuddfwfbmosugiol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020539.801139-206-243360003621274/AnsiballZ_systemd.py'
Nov 24 21:42:20 compute-0 sudo[69794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:20 compute-0 python3.9[69796]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 21:42:20 compute-0 sudo[69794]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:21 compute-0 sudo[69948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wksitzycludyfytygfhmtvpoullvojyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020540.9302373-222-217429788077202/AnsiballZ_systemd.py'
Nov 24 21:42:21 compute-0 sudo[69948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:21 compute-0 python3.9[69950]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 21:42:21 compute-0 systemd[1]: Reloading.
Nov 24 21:42:21 compute-0 systemd-sysv-generator[69979]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:42:21 compute-0 systemd-rc-local-generator[69976]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:42:21 compute-0 systemd[1]: Starting Netfilter Tables...
Nov 24 21:42:21 compute-0 systemd[1]: Finished Netfilter Tables.
Nov 24 21:42:21 compute-0 sudo[69948]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:22 compute-0 sudo[70140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjdjzmwgefcytzbbojgcumpbtuicuzhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020542.1941545-230-62711354146257/AnsiballZ_command.py'
Nov 24 21:42:22 compute-0 sudo[70140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:22 compute-0 python3.9[70142]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:42:22 compute-0 sudo[70140]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:23 compute-0 sudo[70293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pshcxckfdxezudwihoioefqgydsukfme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020543.5231786-244-99813444542142/AnsiballZ_stat.py'
Nov 24 21:42:23 compute-0 sudo[70293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:24 compute-0 python3.9[70295]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:42:24 compute-0 sudo[70293]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:24 compute-0 sudo[70418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pywagabqdpbljxxhrqndmpduawwrbpow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020543.5231786-244-99813444542142/AnsiballZ_copy.py'
Nov 24 21:42:24 compute-0 sudo[70418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:24 compute-0 python3.9[70420]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764020543.5231786-244-99813444542142/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:42:24 compute-0 sudo[70418]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:25 compute-0 sudo[70571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-leshdxeqwmdoksvscwbwgzqgpinooxdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020545.1645546-259-117826175345390/AnsiballZ_systemd.py'
Nov 24 21:42:25 compute-0 sudo[70571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:25 compute-0 python3.9[70573]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 21:42:25 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Nov 24 21:42:25 compute-0 sshd[1010]: Received SIGHUP; restarting.
Nov 24 21:42:25 compute-0 sshd[1010]: Server listening on 0.0.0.0 port 22.
Nov 24 21:42:25 compute-0 sshd[1010]: Server listening on :: port 22.
Nov 24 21:42:25 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Nov 24 21:42:25 compute-0 sudo[70571]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:26 compute-0 sudo[70727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsxfiupzhdxhusnkuvqqboxycucrvizc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020546.1914542-267-28091293128188/AnsiballZ_file.py'
Nov 24 21:42:26 compute-0 sudo[70727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:26 compute-0 python3.9[70729]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:42:26 compute-0 sudo[70727]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:27 compute-0 sudo[70879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sisflpordcmjfvssyxpnxoltcglcdaou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020547.0332086-275-12605343921792/AnsiballZ_stat.py'
Nov 24 21:42:27 compute-0 sudo[70879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:27 compute-0 python3.9[70881]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:42:27 compute-0 sudo[70879]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:28 compute-0 sudo[71002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glemlimzaxqdnsbnkxktdmlgclttxpgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020547.0332086-275-12605343921792/AnsiballZ_copy.py'
Nov 24 21:42:28 compute-0 sudo[71002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:28 compute-0 python3.9[71004]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764020547.0332086-275-12605343921792/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:42:28 compute-0 sudo[71002]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:29 compute-0 sudo[71154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsxfjpbmuvjpissyfmrxytuwfwmbklbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020548.7596407-293-138073214161856/AnsiballZ_timezone.py'
Nov 24 21:42:29 compute-0 sudo[71154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:29 compute-0 python3.9[71156]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 24 21:42:29 compute-0 systemd[1]: Starting Time & Date Service...
Nov 24 21:42:29 compute-0 systemd[1]: Started Time & Date Service.
Nov 24 21:42:29 compute-0 sudo[71154]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:30 compute-0 sudo[71310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbpmvxxonhhudryxsopcdopseexwwpmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020549.9119685-302-61626176646503/AnsiballZ_file.py'
Nov 24 21:42:30 compute-0 sudo[71310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:30 compute-0 python3.9[71312]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:42:30 compute-0 sudo[71310]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:31 compute-0 sudo[71462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfkuzjyrfuoylhuwmitmxsdpxflbbchu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020550.7789168-310-48252736146920/AnsiballZ_stat.py'
Nov 24 21:42:31 compute-0 sudo[71462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:31 compute-0 python3.9[71464]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:42:31 compute-0 sudo[71462]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:31 compute-0 sudo[71585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifwwnjclhzgdpbvlbpgfvfmblznttrii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020550.7789168-310-48252736146920/AnsiballZ_copy.py'
Nov 24 21:42:31 compute-0 sudo[71585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:31 compute-0 python3.9[71587]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764020550.7789168-310-48252736146920/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:42:31 compute-0 sudo[71585]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:32 compute-0 sudo[71737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwkrreqqlqaojuzefqteggfrgvrhoxuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020552.1634586-325-262153645975911/AnsiballZ_stat.py'
Nov 24 21:42:32 compute-0 sudo[71737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:32 compute-0 python3.9[71739]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:42:32 compute-0 sudo[71737]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:33 compute-0 sudo[71860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ciiwnckztznsmpgkfmrrkciespdfztbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020552.1634586-325-262153645975911/AnsiballZ_copy.py'
Nov 24 21:42:33 compute-0 sudo[71860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:33 compute-0 python3.9[71862]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764020552.1634586-325-262153645975911/.source.yaml _original_basename=.oj3eyvdx follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:42:33 compute-0 sudo[71860]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:33 compute-0 sudo[72012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srpjpntqhjsmoaedwctsuefenkadmyeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020553.5718904-340-152287425326182/AnsiballZ_stat.py'
Nov 24 21:42:33 compute-0 sudo[72012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:34 compute-0 python3.9[72014]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:42:34 compute-0 sudo[72012]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:34 compute-0 sudo[72135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhtrwoqjobwirhejbelcfmsnobrwfygk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020553.5718904-340-152287425326182/AnsiballZ_copy.py'
Nov 24 21:42:34 compute-0 sudo[72135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:34 compute-0 python3.9[72137]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764020553.5718904-340-152287425326182/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:42:35 compute-0 sudo[72135]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:35 compute-0 sudo[72287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etjchuhlmrhkunxnvhfdkmcapkpnsebi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020555.1896915-355-240179382888456/AnsiballZ_command.py'
Nov 24 21:42:35 compute-0 sudo[72287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:35 compute-0 python3.9[72289]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:42:35 compute-0 sudo[72287]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:36 compute-0 sudo[72440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oynzclddrpbmzqdhimenqqvhqyhyndql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020555.9863422-363-186614387777567/AnsiballZ_command.py'
Nov 24 21:42:36 compute-0 sudo[72440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:36 compute-0 python3.9[72442]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:42:36 compute-0 sudo[72440]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:37 compute-0 sudo[72593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbxgwhivyiasqsxplkygirffuubdhizz ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764020556.8401074-371-47600456681282/AnsiballZ_edpm_nftables_from_files.py'
Nov 24 21:42:37 compute-0 sudo[72593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:37 compute-0 python3[72595]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 24 21:42:37 compute-0 sudo[72593]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:38 compute-0 sudo[72745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqldkiwnzmmsvbntccxssojckpavmref ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020557.8586817-379-252320141874505/AnsiballZ_stat.py'
Nov 24 21:42:38 compute-0 sudo[72745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:38 compute-0 python3.9[72747]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:42:38 compute-0 sudo[72745]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:38 compute-0 sudo[72868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdraxeurrxgfpohmdgxyuakhxwbrgxwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020557.8586817-379-252320141874505/AnsiballZ_copy.py'
Nov 24 21:42:38 compute-0 sudo[72868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:39 compute-0 python3.9[72870]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764020557.8586817-379-252320141874505/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:42:39 compute-0 sudo[72868]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:39 compute-0 sudo[73020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkqvcivuhpvhzumjzgrrpeczznnmswam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020559.3006163-394-8938704730697/AnsiballZ_stat.py'
Nov 24 21:42:39 compute-0 sudo[73020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:39 compute-0 python3.9[73022]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:42:39 compute-0 sudo[73020]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:40 compute-0 sudo[73143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znaiyfjahwrvfypsuiigwtspndhxlisu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020559.3006163-394-8938704730697/AnsiballZ_copy.py'
Nov 24 21:42:40 compute-0 sudo[73143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:40 compute-0 python3.9[73145]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764020559.3006163-394-8938704730697/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:42:40 compute-0 sudo[73143]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:41 compute-0 sudo[73295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djjdyiwhyvfsnbevcchahluthwdetzjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020560.6564584-409-174067811393987/AnsiballZ_stat.py'
Nov 24 21:42:41 compute-0 sudo[73295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:41 compute-0 python3.9[73297]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:42:41 compute-0 sudo[73295]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:41 compute-0 sudo[73418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkwkdebrnbfchzqwibhizsjsyemmbmbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020560.6564584-409-174067811393987/AnsiballZ_copy.py'
Nov 24 21:42:41 compute-0 sudo[73418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:42 compute-0 python3.9[73420]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764020560.6564584-409-174067811393987/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:42:42 compute-0 sudo[73418]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:42 compute-0 sudo[73570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdsaecrtcqgshuncmwexqvoykccvlyyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020562.2339158-424-25216220715493/AnsiballZ_stat.py'
Nov 24 21:42:42 compute-0 sudo[73570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:42 compute-0 python3.9[73572]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:42:42 compute-0 sudo[73570]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:43 compute-0 sudo[73693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tivcjwdqtkquvqlyuboistxkleumpaqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020562.2339158-424-25216220715493/AnsiballZ_copy.py'
Nov 24 21:42:43 compute-0 sudo[73693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:43 compute-0 python3.9[73695]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764020562.2339158-424-25216220715493/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:42:43 compute-0 sudo[73693]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:44 compute-0 sudo[73845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbcgjembmshhiezvwzzdszhnrttsimll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020563.7438436-439-25473882708880/AnsiballZ_stat.py'
Nov 24 21:42:44 compute-0 sudo[73845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:44 compute-0 python3.9[73847]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:42:44 compute-0 sudo[73845]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:44 compute-0 sudo[73968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgjvwfxsfqbohaijktecmxlefznofmmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020563.7438436-439-25473882708880/AnsiballZ_copy.py'
Nov 24 21:42:44 compute-0 sudo[73968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:45 compute-0 python3.9[73970]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764020563.7438436-439-25473882708880/.source.nft follow=False _original_basename=ruleset.j2 checksum=15a82a0dc61abfd6aa593407582b5b950437eb80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:42:45 compute-0 sudo[73968]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:45 compute-0 sudo[74120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojezunbstwodtizfbhnoagdygepeuxqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020565.4356768-454-204510680841442/AnsiballZ_file.py'
Nov 24 21:42:45 compute-0 sudo[74120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:45 compute-0 python3.9[74122]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:42:46 compute-0 sudo[74120]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:46 compute-0 sudo[74272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llgnjvbqjcmfqbpcymspbifunpgsqfph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020566.2175372-462-78032898434110/AnsiballZ_command.py'
Nov 24 21:42:46 compute-0 sudo[74272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:46 compute-0 python3.9[74274]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:42:46 compute-0 sudo[74272]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:47 compute-0 sudo[74431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvwzudezeszgbizkvhwniggvvmltcnop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020567.0952435-470-142379250593988/AnsiballZ_blockinfile.py'
Nov 24 21:42:47 compute-0 sudo[74431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:47 compute-0 python3.9[74433]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:42:47 compute-0 sudo[74431]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:48 compute-0 sudo[74584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgrkhaznlwdeyqsubahbstxvdnssvagw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020568.2169497-479-88394207861132/AnsiballZ_file.py'
Nov 24 21:42:48 compute-0 sudo[74584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:48 compute-0 python3.9[74586]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:42:48 compute-0 sudo[74584]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:49 compute-0 sudo[74736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omjkonzbewgwvntkdtpqwvoysthnlqby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020569.0139196-479-259891722871545/AnsiballZ_file.py'
Nov 24 21:42:49 compute-0 sudo[74736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:49 compute-0 python3.9[74738]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:42:49 compute-0 sudo[74736]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:50 compute-0 sudo[74888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmqgdausjuwylrkjyjrcpjapiafilyfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020569.8042536-494-228833940880485/AnsiballZ_mount.py'
Nov 24 21:42:50 compute-0 sudo[74888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:50 compute-0 python3.9[74890]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 24 21:42:50 compute-0 sudo[74888]: pam_unix(sudo:session): session closed for user root
Nov 24 21:42:50 compute-0 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 21:42:50 compute-0 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 21:42:51 compute-0 sudo[75042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tasqmiqnjoujinrxmygvhzmhajrdjvas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020571.0455675-494-85708225092051/AnsiballZ_mount.py'
Nov 24 21:42:51 compute-0 sudo[75042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:51 compute-0 python3.9[75044]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 24 21:42:51 compute-0 sudo[75042]: pam_unix(sudo:session): session closed for user root
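Between 21:42:48 and 21:42:51 the play creates the two hugepage mount points and mounts hugetlbfs on them with state=mounted and boot=True, so the entries are persisted as well as mounted. A rough non-Ansible equivalent, with paths, ownership and options taken from the logged invocations (the exact fstab lines ansible.posix.mount writes are an assumption):

    # mount points owned by zuul:hugetlbfs, mode 0775, as in the file tasks
    install -d -o zuul -g hugetlbfs -m 0775 /dev/hugepages1G /dev/hugepages2M
    # persistent hugetlbfs mounts, one per page size
    echo 'none /dev/hugepages1G hugetlbfs pagesize=1G 0 0' >> /etc/fstab
    echo 'none /dev/hugepages2M hugetlbfs pagesize=2M 0 0' >> /etc/fstab
    mount /dev/hugepages1G
    mount /dev/hugepages2M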
Nov 24 21:42:52 compute-0 sshd-session[65884]: Connection closed by 192.168.122.30 port 44560
Nov 24 21:42:52 compute-0 sshd-session[65881]: pam_unix(sshd:session): session closed for user zuul
Nov 24 21:42:52 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Nov 24 21:42:52 compute-0 systemd[1]: session-15.scope: Consumed 42.568s CPU time.
Nov 24 21:42:52 compute-0 systemd-logind[806]: Session 15 logged out. Waiting for processes to exit.
Nov 24 21:42:52 compute-0 systemd-logind[806]: Removed session 15.
Nov 24 21:42:58 compute-0 sshd-session[75070]: Accepted publickey for zuul from 192.168.122.30 port 42472 ssh2: ECDSA SHA256:EZFVJPqmz26FBCQurIACD0v24tZqtt5C19oQoS60ZHY
Nov 24 21:42:59 compute-0 systemd-logind[806]: New session 16 of user zuul.
Nov 24 21:42:59 compute-0 systemd[1]: Started Session 16 of User zuul.
Nov 24 21:42:59 compute-0 sshd-session[75070]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 21:42:59 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 24 21:42:59 compute-0 sudo[75225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxqoplfnbxgctzzlftoofmpkkquzbweu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020579.1494427-16-188827978812539/AnsiballZ_tempfile.py'
Nov 24 21:42:59 compute-0 sudo[75225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:42:59 compute-0 python3.9[75227]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 24 21:42:59 compute-0 sudo[75225]: pam_unix(sudo:session): session closed for user root
Nov 24 21:43:00 compute-0 sudo[75377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puhalldffzwckzbkhgsdcmbyuvnqsede ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020580.1403527-28-50870060607352/AnsiballZ_stat.py'
Nov 24 21:43:00 compute-0 sudo[75377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:43:00 compute-0 python3.9[75379]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:43:00 compute-0 sudo[75377]: pam_unix(sudo:session): session closed for user root
Nov 24 21:43:02 compute-0 sudo[75529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eikdwhbsjjgrpqivznqqydhhcwujcmmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020581.327744-38-46167573186252/AnsiballZ_setup.py'
Nov 24 21:43:02 compute-0 sudo[75529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:43:02 compute-0 python3.9[75531]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 21:43:02 compute-0 sudo[75529]: pam_unix(sudo:session): session closed for user root
Nov 24 21:43:03 compute-0 sudo[75681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnbnvkmxsvzmriqalushlmvsobmqxehs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020582.6642823-47-215279866142756/AnsiballZ_blockinfile.py'
Nov 24 21:43:03 compute-0 sudo[75681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:43:03 compute-0 python3.9[75683]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDvZrSf+JUpm4SKJPdwLCMF//8o17tJhyLkSWet/s0+bJvf9j/sTJv+iZMQVIVnX/9AgDAN5nrfxnvFAOSH8b3Vo6tiFs/HYIq5z50edbd4O3oYvXYNm3jj+LSyAxP7Sf6VEvJ9/O5yO/ZyGE81/+svtleVPVQN/Y60a+n1eSX1lgzxyAnds/OZkb+ymgMZ4XdYwaqaNrNk0icX+aCfh2gqPWo6dCmQXZmP5jo7XOgHDgxU7VDjjcy7ZIYCyrrdnMJU0bxrq6BKv82028XO3I7l8YjsCcWdrzHaGFNE20K2iNjCbhx8TTxNLriBgrrY1D/cvAeEf0bbxhiGVflm2pAE80TfM0PAgH575RsR/uN7gqlJDF6FDj2/q1Qx7bi9ceMIBx0YLP4rt7WGJzj67Hd9jNpucCYzFmkDcFiNblWZ+AmW02Leqf75hpwhjuN3NaazSgG4Hv2x64wgd+mq3avQiTBDTOz2ZtbbHPTCrBYW88C3Anfriup/AyU93MuyvyE=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHuH+T12nLcJumEIph1Re5cLNbn1SV/SWxxE6O+9w9+K
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGVi9ULN3gxrmghEy+RoFh9V3W8Ww2qThn4dW/X4iuhXiddaBZLEfEPlHfXtfS06SQGvcS0/M6O8fDgfuKlA0YA=
                                             create=True mode=0644 path=/tmp/ansible.i5jmhnvx state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:43:03 compute-0 sudo[75681]: pam_unix(sudo:session): session closed for user root
Nov 24 21:43:04 compute-0 sudo[75833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bymcdqouvolwgfwpfvfuuiofcrmyzmig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020583.6847546-55-103293499062450/AnsiballZ_command.py'
Nov 24 21:43:04 compute-0 sudo[75833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:43:04 compute-0 python3.9[75835]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.i5jmhnvx' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:43:04 compute-0 sudo[75833]: pam_unix(sudo:session): session closed for user root
Nov 24 21:43:05 compute-0 sudo[75987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmgxwnalquethihinifbzebufbxulncn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020584.6859038-63-225929025500558/AnsiballZ_file.py'
Nov 24 21:43:05 compute-0 sudo[75987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:43:05 compute-0 python3.9[75989]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.i5jmhnvx state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:43:05 compute-0 sudo[75987]: pam_unix(sudo:session): session closed for user root
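Session 16 follows a build-then-install pattern for the system-wide known_hosts: the gathered host keys are written into a temporary file with blockinfile, that file is copied over /etc/ssh/ssh_known_hosts, and the temporary file is removed. Reduced to shell (the temp path is simply the one this run happened to get):

    tmp=/tmp/ansible.i5jmhnvx        # created by the tempfile task
    cat "$tmp" > /etc/ssh/ssh_known_hosts
    rm -f "$tmp"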
Nov 24 21:43:05 compute-0 sshd-session[75073]: Connection closed by 192.168.122.30 port 42472
Nov 24 21:43:05 compute-0 sshd-session[75070]: pam_unix(sshd:session): session closed for user zuul
Nov 24 21:43:05 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Nov 24 21:43:05 compute-0 systemd[1]: session-16.scope: Consumed 4.213s CPU time.
Nov 24 21:43:05 compute-0 systemd-logind[806]: Session 16 logged out. Waiting for processes to exit.
Nov 24 21:43:05 compute-0 systemd-logind[806]: Removed session 16.
Nov 24 21:43:11 compute-0 sshd-session[76014]: Accepted publickey for zuul from 192.168.122.30 port 52932 ssh2: ECDSA SHA256:EZFVJPqmz26FBCQurIACD0v24tZqtt5C19oQoS60ZHY
Nov 24 21:43:11 compute-0 systemd-logind[806]: New session 17 of user zuul.
Nov 24 21:43:11 compute-0 systemd[1]: Started Session 17 of User zuul.
Nov 24 21:43:11 compute-0 sshd-session[76014]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 21:43:12 compute-0 python3.9[76167]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 21:43:13 compute-0 sudo[76321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lekqzrgnxbzdnkrxaiawdaswdyzvlnbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020592.8510141-32-170302798302435/AnsiballZ_systemd.py'
Nov 24 21:43:13 compute-0 sudo[76321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:43:13 compute-0 python3.9[76323]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 24 21:43:13 compute-0 sudo[76321]: pam_unix(sudo:session): session closed for user root
Nov 24 21:43:14 compute-0 sudo[76475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqmnigjjmjndkvadyilrjycfwuzpvlht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020594.1474504-40-87071520573378/AnsiballZ_systemd.py'
Nov 24 21:43:14 compute-0 sudo[76475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:43:14 compute-0 python3.9[76477]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 21:43:14 compute-0 sudo[76475]: pam_unix(sudo:session): session closed for user root
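The two systemd tasks at 21:43:13-14 only make sure sshd is enabled and running; the CLI equivalent is simply:

    systemctl enable sshd
    systemctl start sshd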
Nov 24 21:43:15 compute-0 sudo[76628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwtzfqbmgzsdrsifhrjsbsdoyalwllhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020595.1648834-49-175171974989161/AnsiballZ_command.py'
Nov 24 21:43:15 compute-0 sudo[76628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:43:15 compute-0 python3.9[76630]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:43:15 compute-0 sudo[76628]: pam_unix(sudo:session): session closed for user root
Nov 24 21:43:16 compute-0 sudo[76781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jersrbdulwxedrmvptntzvdtxitdgbyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020596.2020497-57-186678448453751/AnsiballZ_stat.py'
Nov 24 21:43:16 compute-0 sudo[76781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:43:16 compute-0 python3.9[76783]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:43:16 compute-0 sudo[76781]: pam_unix(sudo:session): session closed for user root
Nov 24 21:43:17 compute-0 sudo[76935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blzhncrzjpzfcfquzfpzpnxbbqqjcwjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020597.176263-65-48163551814358/AnsiballZ_command.py'
Nov 24 21:43:17 compute-0 sudo[76935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:43:17 compute-0 python3.9[76937]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:43:18 compute-0 sudo[76935]: pam_unix(sudo:session): session closed for user root
Nov 24 21:43:18 compute-0 sudo[77090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzcnilsjxhqhivtrluscjycmnmcntxut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020598.224832-73-34131716096868/AnsiballZ_file.py'
Nov 24 21:43:18 compute-0 sudo[77090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:43:18 compute-0 python3.9[77092]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:43:18 compute-0 sudo[77090]: pam_unix(sudo:session): session closed for user root
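The firewall tasks in session 17 use a simple changed-flag protocol: the chain definitions are always reloaded, while the flush/rules/update-jumps files are replayed only when the edpm-rules.nft.changed marker written earlier exists, and the marker is removed afterwards. As plain shell (commands and paths from the logged tasks; the conditional is implied by the stat on the marker):

    nft -f /etc/nftables/edpm-chains.nft
    if [ -e /etc/nftables/edpm-rules.nft.changed ]; then
        set -o pipefail
        cat /etc/nftables/edpm-flushes.nft \
            /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -
        rm -f /etc/nftables/edpm-rules.nft.changed
    fi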
Nov 24 21:43:19 compute-0 sshd-session[76017]: Connection closed by 192.168.122.30 port 52932
Nov 24 21:43:19 compute-0 sshd-session[76014]: pam_unix(sshd:session): session closed for user zuul
Nov 24 21:43:19 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Nov 24 21:43:19 compute-0 systemd[1]: session-17.scope: Consumed 5.411s CPU time.
Nov 24 21:43:19 compute-0 systemd-logind[806]: Session 17 logged out. Waiting for processes to exit.
Nov 24 21:43:19 compute-0 systemd-logind[806]: Removed session 17.
Nov 24 21:43:25 compute-0 sshd-session[77117]: Accepted publickey for zuul from 192.168.122.30 port 44778 ssh2: ECDSA SHA256:EZFVJPqmz26FBCQurIACD0v24tZqtt5C19oQoS60ZHY
Nov 24 21:43:25 compute-0 systemd-logind[806]: New session 18 of user zuul.
Nov 24 21:43:25 compute-0 systemd[1]: Started Session 18 of User zuul.
Nov 24 21:43:25 compute-0 sshd-session[77117]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 21:43:26 compute-0 python3.9[77270]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 21:43:27 compute-0 sudo[77424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdkkovtlbvpnrtqggaqymhxpstvqcjgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020607.2983115-34-8205985943340/AnsiballZ_setup.py'
Nov 24 21:43:27 compute-0 sudo[77424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:43:28 compute-0 python3.9[77426]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 21:43:28 compute-0 sudo[77424]: pam_unix(sudo:session): session closed for user root
Nov 24 21:43:28 compute-0 sudo[77508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtustshmgzjuwneboftyizbkryvyrphi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020607.2983115-34-8205985943340/AnsiballZ_dnf.py'
Nov 24 21:43:28 compute-0 sudo[77508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:43:29 compute-0 python3.9[77510]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 24 21:43:30 compute-0 sudo[77508]: pam_unix(sudo:session): session closed for user root
Nov 24 21:43:31 compute-0 python3.9[77661]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:43:32 compute-0 python3.9[77812]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
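The two checks at 21:43:31-32 decide whether the node needs a reboot: needs-restarting -r (from yum-utils, installed just before) exits non-zero when the running kernel or core services predate installed updates, and the find looks for reboot markers dropped under /var/lib/openstack/reboot_required/. As a sketch:

    needs-restarting -r                               # rc 1 => reboot required
    find /var/lib/openstack/reboot_required/ -type f  # deployment reboot flags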
Nov 24 21:43:33 compute-0 python3.9[77962]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:43:34 compute-0 python3.9[78112]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:43:35 compute-0 sshd-session[77120]: Connection closed by 192.168.122.30 port 44778
Nov 24 21:43:35 compute-0 sshd-session[77117]: pam_unix(sshd:session): session closed for user zuul
Nov 24 21:43:35 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Nov 24 21:43:35 compute-0 systemd[1]: session-18.scope: Consumed 6.616s CPU time.
Nov 24 21:43:35 compute-0 systemd-logind[806]: Session 18 logged out. Waiting for processes to exit.
Nov 24 21:43:35 compute-0 systemd-logind[806]: Removed session 18.
Nov 24 21:43:42 compute-0 sshd-session[78137]: Accepted publickey for zuul from 192.168.122.30 port 51174 ssh2: ECDSA SHA256:EZFVJPqmz26FBCQurIACD0v24tZqtt5C19oQoS60ZHY
Nov 24 21:43:42 compute-0 systemd-logind[806]: New session 19 of user zuul.
Nov 24 21:43:42 compute-0 systemd[1]: Started Session 19 of User zuul.
Nov 24 21:43:42 compute-0 sshd-session[78137]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 21:43:43 compute-0 python3.9[78290]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 21:43:45 compute-0 sudo[78444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztrxnbgdkrioxrthphnrmfaaxejwuwjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020624.5156476-50-281450401726985/AnsiballZ_file.py'
Nov 24 21:43:45 compute-0 sudo[78444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:43:45 compute-0 python3.9[78446]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:43:45 compute-0 sudo[78444]: pam_unix(sudo:session): session closed for user root
Nov 24 21:43:45 compute-0 sudo[78596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxdylifjpqnopxnsiplkjwfperkxinic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020625.487319-50-84184152451369/AnsiballZ_file.py'
Nov 24 21:43:45 compute-0 sudo[78596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:43:46 compute-0 python3.9[78598]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:43:46 compute-0 sudo[78596]: pam_unix(sudo:session): session closed for user root
Nov 24 21:43:46 compute-0 sudo[78748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omkushbeshtwddvnrvqukiaifivssjlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020626.3043447-65-139770119288508/AnsiballZ_stat.py'
Nov 24 21:43:46 compute-0 sudo[78748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:43:47 compute-0 python3.9[78750]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:43:47 compute-0 sudo[78748]: pam_unix(sudo:session): session closed for user root
Nov 24 21:43:47 compute-0 sudo[78871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxjbzanofevrrkiekalsmslfambbpndp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020626.3043447-65-139770119288508/AnsiballZ_copy.py'
Nov 24 21:43:47 compute-0 sudo[78871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:43:48 compute-0 python3.9[78873]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764020626.3043447-65-139770119288508/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=1bd90cf8db0300b7266c6ed49be750acc321f780 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:43:48 compute-0 sudo[78871]: pam_unix(sudo:session): session closed for user root
Nov 24 21:43:48 compute-0 sudo[79023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfzcevbuhdorlrbbotdaupvgzenpamjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020628.2143362-65-149046942820171/AnsiballZ_stat.py'
Nov 24 21:43:48 compute-0 sudo[79023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:43:48 compute-0 python3.9[79025]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:43:48 compute-0 sudo[79023]: pam_unix(sudo:session): session closed for user root
Nov 24 21:43:49 compute-0 sudo[79146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzykrntumgunzamssbwjiaszbsqljfwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020628.2143362-65-149046942820171/AnsiballZ_copy.py'
Nov 24 21:43:49 compute-0 sudo[79146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:43:49 compute-0 python3.9[79148]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764020628.2143362-65-149046942820171/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=7947ddb8aa7ef62ba34bac1391c0ff2604ec8ef1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:43:49 compute-0 sudo[79146]: pam_unix(sudo:session): session closed for user root
Nov 24 21:43:49 compute-0 sudo[79298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwkopimcbojpuqkcfwjmtemajrsyosak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020629.66891-65-97453127868164/AnsiballZ_stat.py'
Nov 24 21:43:49 compute-0 sudo[79298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:43:50 compute-0 python3.9[79300]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:43:50 compute-0 sudo[79298]: pam_unix(sudo:session): session closed for user root
Nov 24 21:43:50 compute-0 sudo[79421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdfrwbjpyqrllbhqoqxrafiaiqjavwju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020629.66891-65-97453127868164/AnsiballZ_copy.py'
Nov 24 21:43:50 compute-0 sudo[79421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:43:50 compute-0 python3.9[79423]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764020629.66891-65-97453127868164/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=f9185ffff2a307633a9f8d812a9d538cb342dd6f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:43:50 compute-0 sudo[79421]: pam_unix(sudo:session): session closed for user root
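Starting at 21:43:45 the same certificate layout is installed once per service bundle: a root-owned 0755 directory labelled container_file_t, then tls.crt, ca.crt and tls.key copied in with mode 0600. The telemetry-power-monitoring set above is the first of several; telemetry, ovn, libvirt and neutron-metadata follow the identical pattern below. A hypothetical shell rendering of one bundle (names and modes from the logged tasks):

    d=/var/lib/openstack/certs/telemetry-power-monitoring/default
    install -d -m 0755 -o root -g root "$d"
    chcon -t container_file_t "$d"        # setype from the file task
    install -m 0600 -o root -g root tls.crt ca.crt tls.key "$d"/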
Nov 24 21:43:51 compute-0 sudo[79573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpvawiggdblzztphhlqmmpoorsvsfkdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020631.1213443-109-198306263634362/AnsiballZ_file.py'
Nov 24 21:43:51 compute-0 sudo[79573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:43:51 compute-0 python3.9[79575]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:43:51 compute-0 sudo[79573]: pam_unix(sudo:session): session closed for user root
Nov 24 21:43:52 compute-0 sudo[79725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwqkhswikavdbrsvpnpfjknwbbvilgto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020631.8103282-109-55140529167733/AnsiballZ_file.py'
Nov 24 21:43:52 compute-0 sudo[79725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:43:52 compute-0 python3.9[79727]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:43:52 compute-0 sudo[79725]: pam_unix(sudo:session): session closed for user root
Nov 24 21:43:52 compute-0 sudo[79877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rduoehdxicwynxjejlrrlafgynanrqiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020632.549571-124-230974033840697/AnsiballZ_stat.py'
Nov 24 21:43:52 compute-0 sudo[79877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:43:53 compute-0 python3.9[79879]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:43:53 compute-0 sudo[79877]: pam_unix(sudo:session): session closed for user root
Nov 24 21:43:53 compute-0 sudo[80000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkhsxtowglznujgewkkjzyhqeqvhncwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020632.549571-124-230974033840697/AnsiballZ_copy.py'
Nov 24 21:43:53 compute-0 sudo[80000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:43:53 compute-0 python3.9[80002]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764020632.549571-124-230974033840697/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=67991510ff54a8139e1bf08fa4b32e8bbf47b043 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:43:53 compute-0 sudo[80000]: pam_unix(sudo:session): session closed for user root
Nov 24 21:43:54 compute-0 sudo[80152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tljbxxiecocictzwshrbvzsrsrjpxonq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020634.0117903-124-112192814759502/AnsiballZ_stat.py'
Nov 24 21:43:54 compute-0 sudo[80152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:43:54 compute-0 python3.9[80154]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:43:54 compute-0 sudo[80152]: pam_unix(sudo:session): session closed for user root
Nov 24 21:43:55 compute-0 sudo[80275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxshbkkfvvfqbavejwnkeepzjzylpfwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020634.0117903-124-112192814759502/AnsiballZ_copy.py'
Nov 24 21:43:55 compute-0 sudo[80275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:43:55 compute-0 python3.9[80277]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764020634.0117903-124-112192814759502/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=7947ddb8aa7ef62ba34bac1391c0ff2604ec8ef1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:43:55 compute-0 sudo[80275]: pam_unix(sudo:session): session closed for user root
Nov 24 21:43:55 compute-0 sudo[80427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohzvjbvusqltkuwoxxugdjsrdtwfemzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020635.5426497-124-156810226658629/AnsiballZ_stat.py'
Nov 24 21:43:55 compute-0 sudo[80427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:43:55 compute-0 python3.9[80429]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:43:56 compute-0 sudo[80427]: pam_unix(sudo:session): session closed for user root
Nov 24 21:43:56 compute-0 sudo[80550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqwrxdwuwgpdfozhaywgqfjmcuakjzpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020635.5426497-124-156810226658629/AnsiballZ_copy.py'
Nov 24 21:43:56 compute-0 sudo[80550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:43:56 compute-0 python3.9[80552]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764020635.5426497-124-156810226658629/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=d36eb00a8a10ce3fec46e97722bd5932b96a0471 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:43:56 compute-0 sudo[80550]: pam_unix(sudo:session): session closed for user root
Nov 24 21:43:57 compute-0 chronyd[65855]: Selected source 158.69.193.108 (pool.ntp.org)
Nov 24 21:43:57 compute-0 sudo[80702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htbptfxkcinqvtdptffxskwarfyhqewu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020636.9375677-168-182971604229173/AnsiballZ_file.py'
Nov 24 21:43:57 compute-0 sudo[80702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:43:57 compute-0 python3.9[80704]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:43:57 compute-0 sudo[80702]: pam_unix(sudo:session): session closed for user root
Nov 24 21:43:58 compute-0 sudo[80854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfzxdraspxqudlgyrgwgzmbeyvranjqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020637.7411966-168-250087720887621/AnsiballZ_file.py'
Nov 24 21:43:58 compute-0 sudo[80854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:43:58 compute-0 python3.9[80856]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:43:58 compute-0 sudo[80854]: pam_unix(sudo:session): session closed for user root
Nov 24 21:43:58 compute-0 sudo[81006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouayrqfimlyydvbuveikioeqhrtutprw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020638.5318723-183-267542538229285/AnsiballZ_stat.py'
Nov 24 21:43:58 compute-0 sudo[81006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:43:59 compute-0 python3.9[81008]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:43:59 compute-0 sudo[81006]: pam_unix(sudo:session): session closed for user root
Nov 24 21:43:59 compute-0 sudo[81129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pguvxdqngcoissfzgndcacyhrnqadbrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020638.5318723-183-267542538229285/AnsiballZ_copy.py'
Nov 24 21:43:59 compute-0 sudo[81129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:43:59 compute-0 python3.9[81131]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764020638.5318723-183-267542538229285/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=e3f8e4de55015ec47731c2ee89c60183aee6a65c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:43:59 compute-0 sudo[81129]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:00 compute-0 sudo[81281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agznzcqbapkyspjswhxdklcollerdaeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020639.8012805-183-185296097656131/AnsiballZ_stat.py'
Nov 24 21:44:00 compute-0 sudo[81281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:00 compute-0 python3.9[81283]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:44:00 compute-0 sudo[81281]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:00 compute-0 sudo[81404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkqdauczjfojcalwsvsilhakbbueexah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020639.8012805-183-185296097656131/AnsiballZ_copy.py'
Nov 24 21:44:00 compute-0 sudo[81404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:01 compute-0 python3.9[81406]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764020639.8012805-183-185296097656131/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=2b945c1541d65623462c4529ad284e7617d1c7f7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:44:01 compute-0 sudo[81404]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:01 compute-0 sudo[81556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqhitelgsaalnjhaoioyqlbsvugyklwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020641.144031-183-85832600827122/AnsiballZ_stat.py'
Nov 24 21:44:01 compute-0 sudo[81556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:01 compute-0 python3.9[81558]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:44:01 compute-0 sudo[81556]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:02 compute-0 sudo[81679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iturxkxvjpvynujzvsrqiirrybrfetcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020641.144031-183-85832600827122/AnsiballZ_copy.py'
Nov 24 21:44:02 compute-0 sudo[81679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:02 compute-0 python3.9[81681]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764020641.144031-183-85832600827122/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=d92c80746daa883ab1a21e091fe9c6c54025e097 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:44:02 compute-0 sudo[81679]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:02 compute-0 sshd-session[81686]: Invalid user ubuntu from 45.148.10.240 port 60178
Nov 24 21:44:02 compute-0 sshd-session[81686]: Connection closed by invalid user ubuntu 45.148.10.240 port 60178 [preauth]
Nov 24 21:44:02 compute-0 sudo[81833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvqgpnrsjcqaxqvzrrnampklqjhismas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020642.5836809-227-140039704552583/AnsiballZ_file.py'
Nov 24 21:44:02 compute-0 sudo[81833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:03 compute-0 python3.9[81835]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:44:03 compute-0 sudo[81833]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:03 compute-0 sudo[81985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcrzatkrdhtjvngztheritnetksqsxnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020643.362038-227-26865914617515/AnsiballZ_file.py'
Nov 24 21:44:03 compute-0 sudo[81985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:03 compute-0 python3.9[81987]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:44:03 compute-0 sudo[81985]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:04 compute-0 sudo[82137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qslednqbvklcauzrlawxkqqnjishhfto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020644.1141524-242-107232991235762/AnsiballZ_stat.py'
Nov 24 21:44:04 compute-0 sudo[82137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:04 compute-0 python3.9[82139]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:44:04 compute-0 sudo[82137]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:04 compute-0 sudo[82260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnlsoqaijrfbosbopgzabtkucsbjbmva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020644.1141524-242-107232991235762/AnsiballZ_copy.py'
Nov 24 21:44:04 compute-0 sudo[82260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:05 compute-0 python3.9[82262]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764020644.1141524-242-107232991235762/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=521ca2ca31aa9485b2db2da481b04736be169aa3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:44:05 compute-0 sudo[82260]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:05 compute-0 sudo[82412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vukwetjewlnjdockxljmxnqulifguaki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020645.3627765-242-278230137122035/AnsiballZ_stat.py'
Nov 24 21:44:05 compute-0 sudo[82412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:05 compute-0 python3.9[82414]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:44:05 compute-0 sudo[82412]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:06 compute-0 sudo[82535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctnmtdjyoabsnpkqasfhnfcccwoaqgdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020645.3627765-242-278230137122035/AnsiballZ_copy.py'
Nov 24 21:44:06 compute-0 sudo[82535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:06 compute-0 python3.9[82537]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764020645.3627765-242-278230137122035/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=0dbbf8f1a5fc87f6a6aaf5e32340cd9c2f10204c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:44:06 compute-0 sudo[82535]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:07 compute-0 sudo[82687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vluscgqmkxufkdcmpumfwjqxyystrmqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020646.8257864-242-116604274072771/AnsiballZ_stat.py'
Nov 24 21:44:07 compute-0 sudo[82687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:07 compute-0 python3.9[82689]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:44:07 compute-0 sudo[82687]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:07 compute-0 sudo[82810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcppfnmzfriaiqckukucbdcahjqzjgdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020646.8257864-242-116604274072771/AnsiballZ_copy.py'
Nov 24 21:44:07 compute-0 sudo[82810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:08 compute-0 python3.9[82812]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764020646.8257864-242-116604274072771/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=246e4b387836f495bc341b4cf6f0d507148d4115 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:44:08 compute-0 sudo[82810]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:08 compute-0 sudo[82962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iadfkpelisrnvsvltayxrxjbcetraync ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020648.3934178-286-62759985964955/AnsiballZ_file.py'
Nov 24 21:44:08 compute-0 sudo[82962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:09 compute-0 python3.9[82964]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:44:09 compute-0 sudo[82962]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:09 compute-0 sudo[83114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rclhitxzbenghjublhrtpchixedipjax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020649.1931732-286-204115063103302/AnsiballZ_file.py'
Nov 24 21:44:09 compute-0 sudo[83114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:09 compute-0 python3.9[83116]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:44:09 compute-0 sudo[83114]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:10 compute-0 sudo[83266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjdxnsihkopnzsmlvodxhyxocuqqkbsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020650.0375319-301-209562565561171/AnsiballZ_stat.py'
Nov 24 21:44:10 compute-0 sudo[83266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:10 compute-0 python3.9[83268]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:44:10 compute-0 sudo[83266]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:10 compute-0 sudo[83389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqyzlqbcnkeqpihjvhtyhuxektpntnhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020650.0375319-301-209562565561171/AnsiballZ_copy.py'
Nov 24 21:44:10 compute-0 sudo[83389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:11 compute-0 python3.9[83391]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764020650.0375319-301-209562565561171/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=6314da10a628de2682bcf89b26fba182d5b288de backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:44:11 compute-0 sudo[83389]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:11 compute-0 sudo[83543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiumvjqcylumxzuoystzvijpgcvgrkvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020651.3596468-301-144391902420993/AnsiballZ_stat.py'
Nov 24 21:44:11 compute-0 sudo[83543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:11 compute-0 sshd-session[83518]: Connection closed by 64.226.74.9 port 18789
Nov 24 21:44:11 compute-0 python3.9[83545]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:44:11 compute-0 sudo[83543]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:12 compute-0 sudo[83668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdovepplztsyqwmjxmdbizdwwbbmwspn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020651.3596468-301-144391902420993/AnsiballZ_copy.py'
Nov 24 21:44:12 compute-0 sudo[83668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:12 compute-0 python3.9[83670]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764020651.3596468-301-144391902420993/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=2b945c1541d65623462c4529ad284e7617d1c7f7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:44:12 compute-0 sudo[83668]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:13 compute-0 sudo[83820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zstxssdyftlpgkabxwhowgjbhqdpuvun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020652.7288628-301-222520182044932/AnsiballZ_stat.py'
Nov 24 21:44:13 compute-0 sudo[83820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:13 compute-0 python3.9[83822]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:44:13 compute-0 sudo[83820]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:13 compute-0 sudo[83943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txfpopemsjqebvcjrevmpaglsaxwwjzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020652.7288628-301-222520182044932/AnsiballZ_copy.py'
Nov 24 21:44:13 compute-0 sudo[83943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:13 compute-0 python3.9[83945]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764020652.7288628-301-222520182044932/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=1ba7a82fa2ecf756dd0321c178f63c8432d9f7b7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:44:13 compute-0 sudo[83943]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:15 compute-0 sudo[84095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvgwceziitspkzdknbhvtmcklfgytewj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020654.9493475-361-50236805383368/AnsiballZ_file.py'
Nov 24 21:44:15 compute-0 sudo[84095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:15 compute-0 python3.9[84097]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:44:15 compute-0 sudo[84095]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:15 compute-0 sudo[84247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kghaaqptkpauefqxdfbshnszwzruoxnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020655.650922-369-79206906195909/AnsiballZ_stat.py'
Nov 24 21:44:15 compute-0 sudo[84247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:16 compute-0 python3.9[84249]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:44:16 compute-0 sudo[84247]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:16 compute-0 sudo[84370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfsmxxepmteiggemyqapnclbxzmrwext ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020655.650922-369-79206906195909/AnsiballZ_copy.py'
Nov 24 21:44:16 compute-0 sudo[84370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:16 compute-0 python3.9[84372]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764020655.650922-369-79206906195909/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8d7d504665527e13e822e08f71c705a5ecc3040d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:44:16 compute-0 sudo[84370]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:17 compute-0 sudo[84522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-codgaaxgqbuvnbmpehwjrjmcnzwvineu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020657.1142788-385-208209933612956/AnsiballZ_file.py'
Nov 24 21:44:17 compute-0 sudo[84522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:17 compute-0 python3.9[84524]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:44:17 compute-0 sudo[84522]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:18 compute-0 sudo[84674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnookkhbpgalacpktrwbmfeiaitrjybf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020658.0119503-393-227476102233960/AnsiballZ_stat.py'
Nov 24 21:44:18 compute-0 sudo[84674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:18 compute-0 python3.9[84676]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:44:18 compute-0 sudo[84674]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:19 compute-0 sudo[84797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqpexfluefndlqjezscrmsfnggiaujep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020658.0119503-393-227476102233960/AnsiballZ_copy.py'
Nov 24 21:44:19 compute-0 sudo[84797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:19 compute-0 python3.9[84799]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764020658.0119503-393-227476102233960/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8d7d504665527e13e822e08f71c705a5ecc3040d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:44:19 compute-0 sudo[84797]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:19 compute-0 sudo[84949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bziqduymcwilrckzemetluftmcaukvlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020659.549219-409-89258225123042/AnsiballZ_file.py'
Nov 24 21:44:19 compute-0 sudo[84949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:20 compute-0 python3.9[84951]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:44:20 compute-0 sudo[84949]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:20 compute-0 sudo[85101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnvuvhooiynonnyeqtfyagwtgldsmxic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020660.4025087-417-265545682749378/AnsiballZ_stat.py'
Nov 24 21:44:20 compute-0 sudo[85101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:21 compute-0 python3.9[85103]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:44:21 compute-0 sudo[85101]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:21 compute-0 sudo[85224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-roiuyypmvxkivksnrhekwgkagncijpyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020660.4025087-417-265545682749378/AnsiballZ_copy.py'
Nov 24 21:44:21 compute-0 sudo[85224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:21 compute-0 python3.9[85226]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764020660.4025087-417-265545682749378/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8d7d504665527e13e822e08f71c705a5ecc3040d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:44:21 compute-0 sudo[85224]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:22 compute-0 sudo[85376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlevnfitxlpytrwvusdjmlvkbifpgsnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020661.9738524-433-35744481635859/AnsiballZ_file.py'
Nov 24 21:44:22 compute-0 sudo[85376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:22 compute-0 python3.9[85378]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:44:22 compute-0 sudo[85376]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:23 compute-0 sudo[85528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmdoeetcmgketrhjdobkaorptrbyzfie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020662.7300377-441-171372705504240/AnsiballZ_stat.py'
Nov 24 21:44:23 compute-0 sudo[85528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:23 compute-0 python3.9[85530]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:44:23 compute-0 sudo[85528]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:23 compute-0 sudo[85651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlwdejfizkyalggztoicrnumhdbjwyye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020662.7300377-441-171372705504240/AnsiballZ_copy.py'
Nov 24 21:44:23 compute-0 sudo[85651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:24 compute-0 python3.9[85653]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764020662.7300377-441-171372705504240/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8d7d504665527e13e822e08f71c705a5ecc3040d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:44:24 compute-0 sudo[85651]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:24 compute-0 sudo[85803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdexprmvzhyozmhbmggbebotyhifsjgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020664.279653-457-87522815993579/AnsiballZ_file.py'
Nov 24 21:44:24 compute-0 sudo[85803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:24 compute-0 python3.9[85805]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:44:24 compute-0 sudo[85803]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:25 compute-0 sudo[85955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycrbvwrnfsxaanwdxrjcwpejpzceutjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020665.11923-465-78987481834463/AnsiballZ_stat.py'
Nov 24 21:44:25 compute-0 sudo[85955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:25 compute-0 python3.9[85957]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:44:25 compute-0 sudo[85955]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:26 compute-0 sudo[86078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eaxtzgjfcanjahxigihixdfwoksngxfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020665.11923-465-78987481834463/AnsiballZ_copy.py'
Nov 24 21:44:26 compute-0 sudo[86078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:26 compute-0 python3.9[86080]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764020665.11923-465-78987481834463/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8d7d504665527e13e822e08f71c705a5ecc3040d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:44:26 compute-0 sudo[86078]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:26 compute-0 sudo[86230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhjjnjzpbiojvohxbdhdbkhwuuemrjiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020666.5888367-481-196450336496825/AnsiballZ_file.py'
Nov 24 21:44:26 compute-0 sudo[86230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:27 compute-0 python3.9[86232]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:44:27 compute-0 sudo[86230]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:27 compute-0 sudo[86382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buvfoowvcdaaiwhbumgwenxzobqwjecw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020667.382269-489-130079348053631/AnsiballZ_stat.py'
Nov 24 21:44:27 compute-0 sudo[86382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:27 compute-0 python3.9[86384]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:44:27 compute-0 sudo[86382]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:28 compute-0 sudo[86505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-touebuarvjfbdddynucibhfmnteyfptt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020667.382269-489-130079348053631/AnsiballZ_copy.py'
Nov 24 21:44:28 compute-0 sudo[86505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:28 compute-0 python3.9[86507]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764020667.382269-489-130079348053631/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8d7d504665527e13e822e08f71c705a5ecc3040d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:44:28 compute-0 sudo[86505]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:29 compute-0 sudo[86657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcadxiltkmgvnmrkfinywoyufvnxrwwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020668.9501488-505-232116465170111/AnsiballZ_file.py'
Nov 24 21:44:29 compute-0 sudo[86657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:29 compute-0 python3.9[86659]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:44:29 compute-0 sudo[86657]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:30 compute-0 sudo[86809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyguukldnbfasnyrqsqwyidxebjbdwwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020669.7711852-513-260501450352470/AnsiballZ_stat.py'
Nov 24 21:44:30 compute-0 sudo[86809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:30 compute-0 python3.9[86811]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:44:30 compute-0 sudo[86809]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:30 compute-0 sudo[86932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifxzuygkhthrrbbmdesxxiznuvpsghso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020669.7711852-513-260501450352470/AnsiballZ_copy.py'
Nov 24 21:44:30 compute-0 sudo[86932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:31 compute-0 python3.9[86934]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764020669.7711852-513-260501450352470/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8d7d504665527e13e822e08f71c705a5ecc3040d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:44:31 compute-0 sudo[86932]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:31 compute-0 sudo[87084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdpxrcreakdlsnkcbdxstnfkodqpwwsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020671.351314-529-3056968327682/AnsiballZ_file.py'
Nov 24 21:44:31 compute-0 sudo[87084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:31 compute-0 python3.9[87086]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry-power-monitoring setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:44:31 compute-0 sudo[87084]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:32 compute-0 sshd-session[83593]: Connection closed by 138.68.68.245 port 12916 [preauth]
Nov 24 21:44:32 compute-0 sudo[87236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywcfebjxkvrhxcbulrpnlbswzsowcjur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020672.1815088-537-63176246093053/AnsiballZ_stat.py'
Nov 24 21:44:32 compute-0 sudo[87236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:32 compute-0 python3.9[87238]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:44:32 compute-0 sudo[87236]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:33 compute-0 sudo[87359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybgtywudolczyzozlrgzksgcazexctqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020672.1815088-537-63176246093053/AnsiballZ_copy.py'
Nov 24 21:44:33 compute-0 sudo[87359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:33 compute-0 python3.9[87361]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764020672.1815088-537-63176246093053/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8d7d504665527e13e822e08f71c705a5ecc3040d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:44:33 compute-0 sudo[87359]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:33 compute-0 sshd-session[78140]: Connection closed by 192.168.122.30 port 51174
Nov 24 21:44:33 compute-0 sshd-session[78137]: pam_unix(sshd:session): session closed for user zuul
Nov 24 21:44:33 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Nov 24 21:44:33 compute-0 systemd[1]: session-19.scope: Consumed 40.647s CPU time.
Nov 24 21:44:33 compute-0 systemd-logind[806]: Session 19 logged out. Waiting for processes to exit.
Nov 24 21:44:33 compute-0 systemd-logind[806]: Removed session 19.
Nov 24 21:44:39 compute-0 sshd-session[87386]: Accepted publickey for zuul from 192.168.122.30 port 54510 ssh2: ECDSA SHA256:EZFVJPqmz26FBCQurIACD0v24tZqtt5C19oQoS60ZHY
Nov 24 21:44:39 compute-0 systemd-logind[806]: New session 20 of user zuul.
Nov 24 21:44:39 compute-0 systemd[1]: Started Session 20 of User zuul.
Nov 24 21:44:39 compute-0 sshd-session[87386]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 21:44:40 compute-0 python3.9[87539]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 21:44:41 compute-0 sudo[87693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thgbyjybcvwpizgdvxlxhysykczvqkai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020681.307241-34-201621116744320/AnsiballZ_file.py'
Nov 24 21:44:41 compute-0 sudo[87693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:42 compute-0 python3.9[87695]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:44:42 compute-0 sudo[87693]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:42 compute-0 sudo[87845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egccbssufumtanzqsglkmynmuvxuaksj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020682.1971686-34-230355347831061/AnsiballZ_file.py'
Nov 24 21:44:42 compute-0 sudo[87845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:42 compute-0 python3.9[87847]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:44:42 compute-0 sudo[87845]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:43 compute-0 python3.9[87997]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 21:44:44 compute-0 sudo[88147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhcvrarbtlncqkytxxsjasoqhvldxrht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020683.860913-57-26641447097163/AnsiballZ_seboolean.py'
Nov 24 21:44:44 compute-0 sudo[88147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:44 compute-0 python3.9[88149]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 24 21:44:45 compute-0 sudo[88147]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:46 compute-0 sudo[88303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npohgzexznnrlyfyyvoeqzlkjqyaejbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020686.0179384-67-31575576994055/AnsiballZ_setup.py'
Nov 24 21:44:46 compute-0 dbus-broker-launch[784]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Nov 24 21:44:46 compute-0 sudo[88303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:46 compute-0 python3.9[88305]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 21:44:47 compute-0 sudo[88303]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:47 compute-0 sudo[88387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anxrzxvgqcdydcupgyalbsogdxmofxwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020686.0179384-67-31575576994055/AnsiballZ_dnf.py'
Nov 24 21:44:47 compute-0 sudo[88387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:47 compute-0 python3.9[88389]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 21:44:48 compute-0 sudo[88387]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:49 compute-0 sudo[88540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rokmmidwxstutdbjcdpeukazbccarxfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020689.0622027-79-261106565348318/AnsiballZ_systemd.py'
Nov 24 21:44:49 compute-0 sudo[88540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:50 compute-0 python3.9[88542]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 21:44:50 compute-0 sudo[88540]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:51 compute-0 sudo[88695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owdmwvuzeyhbfwizdlmpvfxuttndufpg ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764020690.5585806-87-75161623183721/AnsiballZ_edpm_nftables_snippet.py'
Nov 24 21:44:51 compute-0 sudo[88695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:51 compute-0 python3[88697]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                            rule:
                                              proto: udp
                                              dport: 4789
                                          - rule_name: 119 neutron geneve networks
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              state: ["UNTRACKED"]
                                          - rule_name: 120 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: OUTPUT
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                          - rule_name: 121 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: PREROUTING
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                           dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Nov 24 21:44:51 compute-0 sudo[88695]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:51 compute-0 sudo[88847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnisdrjbsuxhvxbxyagdyuzbfgwlyqhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020691.5437663-96-237250034336686/AnsiballZ_file.py'
Nov 24 21:44:51 compute-0 sudo[88847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:52 compute-0 python3.9[88849]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:44:52 compute-0 sudo[88847]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:52 compute-0 sudo[88999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnornlmnemzcxwqobznnhyatzaktzyjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020692.3235595-104-47244558640039/AnsiballZ_stat.py'
Nov 24 21:44:52 compute-0 sudo[88999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:53 compute-0 python3.9[89001]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:44:53 compute-0 sudo[88999]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:53 compute-0 sudo[89077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgodtuvezzqdkcblfgfjrzdrntzwkhxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020692.3235595-104-47244558640039/AnsiballZ_file.py'
Nov 24 21:44:53 compute-0 sudo[89077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:53 compute-0 python3.9[89079]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:44:53 compute-0 sudo[89077]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:54 compute-0 sudo[89229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmujeeoirvoeulhdemmtwxjamymdlfxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020693.7595718-116-278185360164775/AnsiballZ_stat.py'
Nov 24 21:44:54 compute-0 sudo[89229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:54 compute-0 python3.9[89231]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:44:54 compute-0 sudo[89229]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:55 compute-0 sudo[89307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwlxntetywsffobmccbcaazfkqmsvvdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020693.7595718-116-278185360164775/AnsiballZ_file.py'
Nov 24 21:44:55 compute-0 sudo[89307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:55 compute-0 python3.9[89309]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.9z9vsau3 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:44:55 compute-0 sudo[89307]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:56 compute-0 sudo[89459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kguuugqokplkodobkzwvzgrypbtrzqsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020695.7509022-128-120826805761689/AnsiballZ_stat.py'
Nov 24 21:44:56 compute-0 sudo[89459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:56 compute-0 python3.9[89461]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:44:56 compute-0 sudo[89459]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:56 compute-0 sudo[89537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elqjcpajjyyrvkzawzfzcafnaaxfckdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020695.7509022-128-120826805761689/AnsiballZ_file.py'
Nov 24 21:44:56 compute-0 sudo[89537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:56 compute-0 python3.9[89539]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:44:56 compute-0 sudo[89537]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:57 compute-0 sudo[89689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcqcionjcchaujvwnhzxvqocrydrqics ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020697.1422408-141-203638347549823/AnsiballZ_command.py'
Nov 24 21:44:57 compute-0 sudo[89689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:57 compute-0 python3.9[89691]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:44:57 compute-0 sudo[89689]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:58 compute-0 sudo[89842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqttbbytzqkxespfbdmtjcuovuszmejf ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764020698.1696894-149-107496974685798/AnsiballZ_edpm_nftables_from_files.py'
Nov 24 21:44:58 compute-0 sudo[89842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:58 compute-0 python3[89844]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 24 21:44:58 compute-0 sudo[89842]: pam_unix(sudo:session): session closed for user root
Nov 24 21:44:59 compute-0 sudo[89994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgsivzfmweuwjykfyqgfqhkxmexkxtcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020699.203516-157-169314871471055/AnsiballZ_stat.py'
Nov 24 21:44:59 compute-0 sudo[89994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:44:59 compute-0 python3.9[89996]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:44:59 compute-0 sudo[89994]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:00 compute-0 sudo[90119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyqnutvqkjgleejmdbpnvqkalqpwpdhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020699.203516-157-169314871471055/AnsiballZ_copy.py'
Nov 24 21:45:00 compute-0 sudo[90119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:01 compute-0 python3.9[90121]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764020699.203516-157-169314871471055/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:45:01 compute-0 sudo[90119]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:02 compute-0 sudo[90271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwxbdrdxokourhanagewgjebgevbwsdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020702.091899-172-91231126362287/AnsiballZ_stat.py'
Nov 24 21:45:02 compute-0 sudo[90271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:02 compute-0 python3.9[90273]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:45:02 compute-0 sudo[90271]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:03 compute-0 sudo[90396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmjepjjstzqwccjjqoydtmnnhrjppyhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020702.091899-172-91231126362287/AnsiballZ_copy.py'
Nov 24 21:45:03 compute-0 sudo[90396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:03 compute-0 python3.9[90398]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764020702.091899-172-91231126362287/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:45:03 compute-0 sudo[90396]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:04 compute-0 sudo[90548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvxweynwymyekpxvzbaulloiulysoxpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020703.6313055-187-137845760786577/AnsiballZ_stat.py'
Nov 24 21:45:04 compute-0 sudo[90548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:04 compute-0 python3.9[90550]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:45:04 compute-0 sudo[90548]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:04 compute-0 sudo[90673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vniontecceqcpdcilqvoersyeugoekvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020703.6313055-187-137845760786577/AnsiballZ_copy.py'
Nov 24 21:45:04 compute-0 sudo[90673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:04 compute-0 python3.9[90675]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764020703.6313055-187-137845760786577/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:45:04 compute-0 sudo[90673]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:05 compute-0 sudo[90825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tltxydsmshsxqjqzkwwkndlwilxczsye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020705.1286938-202-249851669168861/AnsiballZ_stat.py'
Nov 24 21:45:05 compute-0 sudo[90825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:05 compute-0 python3.9[90827]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:45:05 compute-0 sudo[90825]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:06 compute-0 sudo[90950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffuytlxkmgfdqqtvdnxwurjtvgxxueuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020705.1286938-202-249851669168861/AnsiballZ_copy.py'
Nov 24 21:45:06 compute-0 sudo[90950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:06 compute-0 python3.9[90952]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764020705.1286938-202-249851669168861/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:45:06 compute-0 sudo[90950]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:07 compute-0 sudo[91102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psmluwiasxnyutnysfszfxnwnvcdlsbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020706.6281393-217-264177484500469/AnsiballZ_stat.py'
Nov 24 21:45:07 compute-0 sudo[91102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:07 compute-0 python3.9[91104]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:45:07 compute-0 sudo[91102]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:07 compute-0 sudo[91227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svkyuyrsyxvxhgfceqbozwootcdlyrng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020706.6281393-217-264177484500469/AnsiballZ_copy.py'
Nov 24 21:45:07 compute-0 sudo[91227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:07 compute-0 python3.9[91229]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764020706.6281393-217-264177484500469/.source.nft follow=False _original_basename=ruleset.j2 checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:45:07 compute-0 sudo[91227]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:08 compute-0 sudo[91379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogophfayrbjdqzvrfchjqxayzsshavaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020708.0804515-232-194195307279098/AnsiballZ_file.py'
Nov 24 21:45:08 compute-0 sudo[91379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:08 compute-0 python3.9[91381]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:45:08 compute-0 sudo[91379]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:09 compute-0 sudo[91531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqqatseanpzdyicmghtaszxkpjefonry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020708.8780253-240-36841127116097/AnsiballZ_command.py'
Nov 24 21:45:09 compute-0 sudo[91531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:09 compute-0 python3.9[91533]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:45:09 compute-0 sudo[91531]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:10 compute-0 sudo[91686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orsdkuahcmlhbgruxaogpjpildmawfiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020709.7467706-248-96151815677652/AnsiballZ_blockinfile.py'
Nov 24 21:45:10 compute-0 sudo[91686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:10 compute-0 python3.9[91688]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:45:10 compute-0 sudo[91686]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:11 compute-0 sudo[91838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eghdpzislyaipyofwekwjbgyhbufmvdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020710.75094-257-273536998088570/AnsiballZ_command.py'
Nov 24 21:45:11 compute-0 sudo[91838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:11 compute-0 python3.9[91840]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:45:11 compute-0 sudo[91838]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:11 compute-0 sudo[91991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivnkvakiriazquymmhpeqeekdfvzxjwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020711.5989819-265-110059301801360/AnsiballZ_stat.py'
Nov 24 21:45:11 compute-0 sudo[91991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:12 compute-0 python3.9[91993]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:45:12 compute-0 sudo[91991]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:12 compute-0 sudo[92145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcqugsdyvbsfrbbqactxczdhnavvviyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020712.3532348-273-130128393280679/AnsiballZ_command.py'
Nov 24 21:45:12 compute-0 sudo[92145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:12 compute-0 python3.9[92147]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:45:12 compute-0 sudo[92145]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:13 compute-0 sudo[92300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iujuptwixigokrwdpvwzkoldsyfywngp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020713.264101-281-112747944441364/AnsiballZ_file.py'
Nov 24 21:45:13 compute-0 sudo[92300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:13 compute-0 python3.9[92302]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:45:13 compute-0 sudo[92300]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:15 compute-0 python3.9[92452]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 21:45:16 compute-0 sudo[92603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgmluawsfzwgmejvxwgqubmkebhorebb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020715.8779325-321-165298980887963/AnsiballZ_command.py'
Nov 24 21:45:16 compute-0 sudo[92603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:16 compute-0 python3.9[92605]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:0e:0a:93:45:69:49" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:45:16 compute-0 ovs-vsctl[92606]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:0e:0a:93:45:69:49 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Nov 24 21:45:16 compute-0 sudo[92603]: pam_unix(sudo:session): session closed for user root
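The external_ids written above are the values ovn-controller reads at startup (SB endpoint, encapsulation IP/type, bridge mappings, chassis MAC mappings). For illustration only, individual keys can be read back with commands such as:

    ovs-vsctl get Open_vSwitch . external_ids:ovn-remote
    ovs-vsctl get Open_vSwitch . external_ids:ovn-encap-ip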
Nov 24 21:45:17 compute-0 sudo[92756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nreaedgupfpnvpsakpajvchnvvnqaljq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020716.7900982-330-80507934396316/AnsiballZ_command.py'
Nov 24 21:45:17 compute-0 sudo[92756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:17 compute-0 python3.9[92758]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                            ovs-vsctl show | grep -q "Manager"
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:45:17 compute-0 sudo[92756]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:17 compute-0 sudo[92913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xleiuetbdtxwzebgqwzrtnwbuitadksa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020717.5616696-338-84616477653261/AnsiballZ_command.py'
Nov 24 21:45:17 compute-0 sudo[92913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:18 compute-0 python3.9[92915]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:6640:127.0.0.1\" -- add Open_vSwitch . manager_options @manager
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:45:18 compute-0 ovs-vsctl[92916]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Nov 24 21:45:18 compute-0 sudo[92913]: pam_unix(sudo:session): session closed for user root
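The create Manager call above (shown fully expanded in the ovs-vsctl INFO line) adds a passive ptcp listener on 127.0.0.1:6640 so local tooling can reach ovsdb-server; it only runs because the preceding 'ovs-vsctl show | grep -q "Manager"' check found none. A quick confirmation, for illustration:

    ovs-vsctl get-manager
    ss -ltn | grep 6640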
Nov 24 21:45:18 compute-0 sshd-session[92786]: Received disconnect from 80.94.93.119 port 41926:11:  [preauth]
Nov 24 21:45:18 compute-0 sshd-session[92786]: Disconnected from authenticating user root 80.94.93.119 port 41926 [preauth]
Nov 24 21:45:18 compute-0 python3.9[93066]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:45:19 compute-0 sudo[93218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xiyljigvhyigohxhxgwllhvvsjqswofd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020719.2195735-355-181866386093445/AnsiballZ_file.py'
Nov 24 21:45:19 compute-0 sudo[93218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:19 compute-0 python3.9[93220]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:45:19 compute-0 sudo[93218]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:20 compute-0 sudo[93370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsobwvsjftytobviulmixdvhrlqlqgxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020719.9666119-363-253284201426606/AnsiballZ_stat.py'
Nov 24 21:45:20 compute-0 sudo[93370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:20 compute-0 python3.9[93372]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:45:20 compute-0 sudo[93370]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:20 compute-0 sudo[93448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxuljpmwnemucemuolefjyxyearduuhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020719.9666119-363-253284201426606/AnsiballZ_file.py'
Nov 24 21:45:20 compute-0 sudo[93448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:21 compute-0 python3.9[93450]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:45:21 compute-0 sudo[93448]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:21 compute-0 sudo[93600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erbligelsahfsthicimgnrcsaqvcildr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020721.3496192-363-50184841245246/AnsiballZ_stat.py'
Nov 24 21:45:21 compute-0 sudo[93600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:21 compute-0 python3.9[93602]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:45:21 compute-0 sudo[93600]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:22 compute-0 sudo[93678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvwmhyasqikyvdwcbvaxgktbafjkskqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020721.3496192-363-50184841245246/AnsiballZ_file.py'
Nov 24 21:45:22 compute-0 sudo[93678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:22 compute-0 python3.9[93680]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:45:22 compute-0 sudo[93678]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:23 compute-0 sudo[93830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmbtgunwndxjwswymgtsfcfbwrtsfinh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020722.6656141-386-101806896380684/AnsiballZ_file.py'
Nov 24 21:45:23 compute-0 sudo[93830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:23 compute-0 python3.9[93832]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:45:23 compute-0 sudo[93830]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:23 compute-0 sudo[93982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsiumdetljovkoizptnzsalmmxdpvtvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020723.4401512-394-273450191343440/AnsiballZ_stat.py'
Nov 24 21:45:23 compute-0 sudo[93982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:23 compute-0 python3.9[93984]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:45:24 compute-0 sudo[93982]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:24 compute-0 sudo[94060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bddafpcfglysrodsrhhkehqjryiomtpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020723.4401512-394-273450191343440/AnsiballZ_file.py'
Nov 24 21:45:24 compute-0 sudo[94060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:24 compute-0 python3.9[94062]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:45:24 compute-0 sudo[94060]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:25 compute-0 sudo[94212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtxrxlrwmuzigkyaitzrrkjdkpmlnrvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020724.6984572-406-151177254155706/AnsiballZ_stat.py'
Nov 24 21:45:25 compute-0 sudo[94212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:25 compute-0 python3.9[94214]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:45:25 compute-0 sudo[94212]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:25 compute-0 sudo[94290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfdfraqyoeyggsopphsmwchdawxjgcgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020724.6984572-406-151177254155706/AnsiballZ_file.py'
Nov 24 21:45:25 compute-0 sudo[94290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:25 compute-0 python3.9[94292]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:45:25 compute-0 sudo[94290]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:26 compute-0 sudo[94442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuywwcwcoqtlarkmwniqhuvanmxwcnro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020726.062456-418-41700464712025/AnsiballZ_systemd.py'
Nov 24 21:45:26 compute-0 sudo[94442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:26 compute-0 python3.9[94444]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 21:45:26 compute-0 systemd[1]: Reloading.
Nov 24 21:45:26 compute-0 systemd-rc-local-generator[94474]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:45:26 compute-0 systemd-sysv-generator[94478]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:45:27 compute-0 sudo[94442]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:27 compute-0 sudo[94633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cocvspinhfjwcotwfkgvssscwyfduzfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020727.3169022-426-32185839929737/AnsiballZ_stat.py'
Nov 24 21:45:27 compute-0 sudo[94633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:27 compute-0 python3.9[94635]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:45:27 compute-0 sudo[94633]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:28 compute-0 sudo[94711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwrcvufmnawmmwomeesnnglovrsnrbcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020727.3169022-426-32185839929737/AnsiballZ_file.py'
Nov 24 21:45:28 compute-0 sudo[94711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:28 compute-0 python3.9[94713]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:45:28 compute-0 sudo[94711]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:29 compute-0 sudo[94863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgxetzrchshqonucltweyyzicfmzhnhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020728.7048223-438-106229515301422/AnsiballZ_stat.py'
Nov 24 21:45:29 compute-0 sudo[94863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:29 compute-0 python3.9[94865]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:45:29 compute-0 sudo[94863]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:29 compute-0 sudo[94941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gysidvwlvypgykmuyexyanwayvuckhhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020728.7048223-438-106229515301422/AnsiballZ_file.py'
Nov 24 21:45:29 compute-0 sudo[94941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:29 compute-0 python3.9[94943]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:45:29 compute-0 sudo[94941]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:30 compute-0 sudo[95093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahhagdiuviunjycjzisbqtwrtumbstxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020730.0433376-450-186205485582910/AnsiballZ_systemd.py'
Nov 24 21:45:30 compute-0 sudo[95093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:30 compute-0 python3.9[95095]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 21:45:30 compute-0 systemd[1]: Reloading.
Nov 24 21:45:30 compute-0 systemd-rc-local-generator[95120]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:45:30 compute-0 systemd-sysv-generator[95125]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:45:31 compute-0 systemd[1]: Starting Create netns directory...
Nov 24 21:45:31 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 24 21:45:31 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 24 21:45:31 compute-0 systemd[1]: Finished Create netns directory.
Nov 24 21:45:31 compute-0 sudo[95093]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:31 compute-0 sudo[95288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-defywwwwfmxfmixhmotkujifinjxupfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020731.4448254-460-115586390573198/AnsiballZ_file.py'
Nov 24 21:45:31 compute-0 sudo[95288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:32 compute-0 python3.9[95290]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:45:32 compute-0 sudo[95288]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:32 compute-0 sudo[95440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsfbdvvuwvzljjzznsmuayujestmrlwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020732.271801-468-55198242150398/AnsiballZ_stat.py'
Nov 24 21:45:32 compute-0 sudo[95440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:32 compute-0 python3.9[95442]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:45:32 compute-0 sudo[95440]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:33 compute-0 sudo[95563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfgqcwribtpwjupvneyyzrjancfywatv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020732.271801-468-55198242150398/AnsiballZ_copy.py'
Nov 24 21:45:33 compute-0 sudo[95563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:33 compute-0 python3.9[95565]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764020732.271801-468-55198242150398/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:45:33 compute-0 sudo[95563]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:34 compute-0 sudo[95715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-haozivvwkjarzdfagfalkwtnghztnoid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020734.0625138-485-152853242030385/AnsiballZ_file.py'
Nov 24 21:45:34 compute-0 sudo[95715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:34 compute-0 python3.9[95717]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:45:34 compute-0 sudo[95715]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:35 compute-0 sudo[95867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcsokpfpqqubemwftespszrgyjdnfwdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020734.9100778-493-7576453412970/AnsiballZ_stat.py'
Nov 24 21:45:35 compute-0 sudo[95867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:35 compute-0 python3.9[95869]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:45:35 compute-0 sudo[95867]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:36 compute-0 sudo[95990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lofujsqwrucbzrozvpzihplwzkvboyis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020734.9100778-493-7576453412970/AnsiballZ_copy.py'
Nov 24 21:45:36 compute-0 sudo[95990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:36 compute-0 python3.9[95992]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764020734.9100778-493-7576453412970/.source.json _original_basename=.6k35pbhz follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:45:36 compute-0 sudo[95990]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:36 compute-0 sudo[96142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwramfddlvgiqgjcqjochrfatbwgmrki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020736.4390655-508-8498257025504/AnsiballZ_file.py'
Nov 24 21:45:36 compute-0 sudo[96142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:37 compute-0 python3.9[96144]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:45:37 compute-0 sudo[96142]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:37 compute-0 sudo[96294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whnlxyoeeaqgtbdvwakjvktgfwbxhnhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020737.3037477-516-229518030809928/AnsiballZ_stat.py'
Nov 24 21:45:37 compute-0 sudo[96294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:37 compute-0 sudo[96294]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:38 compute-0 sudo[96417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzqbglxtounhkhlvxqakqlenzngyqjvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020737.3037477-516-229518030809928/AnsiballZ_copy.py'
Nov 24 21:45:38 compute-0 sudo[96417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:38 compute-0 sudo[96417]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:39 compute-0 sudo[96569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crnoweixybggcylchkzqepvbjedkodap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020738.974948-533-212534603715330/AnsiballZ_container_config_data.py'
Nov 24 21:45:39 compute-0 sudo[96569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:39 compute-0 python3.9[96571]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Nov 24 21:45:39 compute-0 sudo[96569]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:40 compute-0 sudo[96721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwyrcgdegedsklcklwoowdobkfsdeoyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020740.000791-542-193470224919303/AnsiballZ_container_config_hash.py'
Nov 24 21:45:40 compute-0 sudo[96721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:40 compute-0 python3.9[96723]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 24 21:45:40 compute-0 sudo[96721]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:41 compute-0 sudo[96873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyhtxwdqclegatfgskpvyxbjfcxqbtgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020741.1088965-551-178905285458318/AnsiballZ_podman_container_info.py'
Nov 24 21:45:41 compute-0 sudo[96873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:41 compute-0 python3.9[96875]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 24 21:45:41 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 21:45:41 compute-0 sudo[96873]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:43 compute-0 sudo[97036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvwzpihcsrqmdevliuqbeeakxbrbpfzn ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764020742.4986713-564-202620509630326/AnsiballZ_edpm_container_manage.py'
Nov 24 21:45:43 compute-0 sudo[97036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:43 compute-0 python3[97038]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 24 21:45:43 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 21:45:43 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 21:45:43 compute-0 podman[97075]: 2025-11-24 21:45:43.498375104 +0000 UTC m=+0.056854964 container create d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:45:43 compute-0 podman[97075]: 2025-11-24 21:45:43.470696381 +0000 UTC m=+0.029176261 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 24 21:45:43 compute-0 python3[97038]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 24 21:45:43 compute-0 sudo[97036]: pam_unix(sudo:session): session closed for user root
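edpm_container_manage logs the exact podman create invocation above; at this point the ovn_controller container exists but is not yet running (it is started later through edpm_ovn_controller.service). For illustration, the created container could be inspected with:

    podman container exists ovn_controller && echo present
    podman inspect ovn_controller --format '{{ .State.Status }}'   # 'created' here, 'running' once systemd starts it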
Nov 24 21:45:44 compute-0 sudo[97263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpzwuqlfyonrbrqgklippipoubtnyczt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020743.8262331-572-252326615993734/AnsiballZ_stat.py'
Nov 24 21:45:44 compute-0 sudo[97263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:44 compute-0 python3.9[97265]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:45:44 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 21:45:44 compute-0 sudo[97263]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:45 compute-0 sudo[97417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pklbhramtjisimkrgzvpejfqkdscybzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020744.6862288-581-209569491964382/AnsiballZ_file.py'
Nov 24 21:45:45 compute-0 sudo[97417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:45 compute-0 python3.9[97419]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:45:45 compute-0 sudo[97417]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:45 compute-0 sudo[97493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqaeevdtqlpgrhsmxdraevaytrufvpou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020744.6862288-581-209569491964382/AnsiballZ_stat.py'
Nov 24 21:45:45 compute-0 sudo[97493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:45 compute-0 python3.9[97495]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:45:45 compute-0 sudo[97493]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:46 compute-0 sudo[97644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anedpghyscmdhxowpmkvpayqonuavexn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020745.9378076-581-51461136000845/AnsiballZ_copy.py'
Nov 24 21:45:46 compute-0 sudo[97644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:46 compute-0 python3.9[97646]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764020745.9378076-581-51461136000845/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:45:46 compute-0 sudo[97644]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:47 compute-0 sudo[97720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qinpehwfurutxowypiscjpcgfbqftjdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020745.9378076-581-51461136000845/AnsiballZ_systemd.py'
Nov 24 21:45:47 compute-0 sudo[97720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:47 compute-0 python3.9[97722]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 21:45:47 compute-0 systemd[1]: Reloading.
Nov 24 21:45:47 compute-0 systemd-rc-local-generator[97745]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:45:47 compute-0 systemd-sysv-generator[97752]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:45:47 compute-0 sudo[97720]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:47 compute-0 sudo[97831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzewtefthtlzqmflhqetgpiyquwytlgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020745.9378076-581-51461136000845/AnsiballZ_systemd.py'
Nov 24 21:45:47 compute-0 sudo[97831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:48 compute-0 python3.9[97833]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 21:45:48 compute-0 systemd[1]: Reloading.
Nov 24 21:45:48 compute-0 systemd-rc-local-generator[97863]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:45:48 compute-0 systemd-sysv-generator[97866]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:45:48 compute-0 systemd[1]: Starting ovn_controller container...
Nov 24 21:45:48 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Nov 24 21:45:48 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:45:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0bc25092ea9190d2c8bce4d1f6da2414a6514cc4c9217e1325aa2029ccbd24c/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 24 21:45:48 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94.
Nov 24 21:45:48 compute-0 podman[97874]: 2025-11-24 21:45:48.795014739 +0000 UTC m=+0.156013962 container init d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 21:45:48 compute-0 ovn_controller[97889]: + sudo -E kolla_set_configs
Nov 24 21:45:48 compute-0 podman[97874]: 2025-11-24 21:45:48.835161266 +0000 UTC m=+0.196160408 container start d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 24 21:45:48 compute-0 edpm-start-podman-container[97874]: ovn_controller
Nov 24 21:45:48 compute-0 systemd[1]: Created slice User Slice of UID 0.
Nov 24 21:45:48 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 24 21:45:48 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 24 21:45:48 compute-0 systemd[1]: Starting User Manager for UID 0...
Nov 24 21:45:48 compute-0 systemd[97928]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Nov 24 21:45:48 compute-0 edpm-start-podman-container[97873]: Creating additional drop-in dependency for "ovn_controller" (d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94)
Nov 24 21:45:48 compute-0 podman[97896]: 2025-11-24 21:45:48.932581989 +0000 UTC m=+0.087846231 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 24 21:45:48 compute-0 systemd[1]: d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94-68bcdda44382dc95.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 21:45:48 compute-0 systemd[1]: d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94-68bcdda44382dc95.service: Failed with result 'exit-code'.
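The failed transient unit above is the podman health check service firing while the container is still initializing (health_status=starting, health_failing_streak=1 in the preceding health_status line), so a single exit-code failure at this point is consistent with normal startup rather than a broken container. Once ovn-controller is running, the check can be repeated by hand, e.g.:

    podman healthcheck run ovn_controller && echo healthy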
Nov 24 21:45:48 compute-0 systemd[1]: Reloading.
Nov 24 21:45:49 compute-0 systemd[97928]: Queued start job for default target Main User Target.
Nov 24 21:45:49 compute-0 systemd[97928]: Created slice User Application Slice.
Nov 24 21:45:49 compute-0 systemd[97928]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Nov 24 21:45:49 compute-0 systemd[97928]: Started Daily Cleanup of User's Temporary Directories.
Nov 24 21:45:49 compute-0 systemd[97928]: Reached target Paths.
Nov 24 21:45:49 compute-0 systemd[97928]: Reached target Timers.
Nov 24 21:45:49 compute-0 systemd[97928]: Starting D-Bus User Message Bus Socket...
Nov 24 21:45:49 compute-0 systemd[97928]: Starting Create User's Volatile Files and Directories...
Nov 24 21:45:49 compute-0 systemd-rc-local-generator[97976]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:45:49 compute-0 systemd-sysv-generator[97979]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:45:49 compute-0 systemd[97928]: Finished Create User's Volatile Files and Directories.
Nov 24 21:45:49 compute-0 systemd[97928]: Listening on D-Bus User Message Bus Socket.
Nov 24 21:45:49 compute-0 systemd[97928]: Reached target Sockets.
Nov 24 21:45:49 compute-0 systemd[97928]: Reached target Basic System.
Nov 24 21:45:49 compute-0 systemd[97928]: Reached target Main User Target.
Nov 24 21:45:49 compute-0 systemd[97928]: Startup finished in 110ms.
Nov 24 21:45:49 compute-0 systemd[1]: Started User Manager for UID 0.
Nov 24 21:45:49 compute-0 systemd[1]: Started Session c1 of User root.
Nov 24 21:45:49 compute-0 systemd[1]: Started ovn_controller container.
Nov 24 21:45:49 compute-0 sudo[97831]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:49 compute-0 ovn_controller[97889]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 24 21:45:49 compute-0 ovn_controller[97889]: INFO:__main__:Validating config file
Nov 24 21:45:49 compute-0 ovn_controller[97889]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 24 21:45:49 compute-0 ovn_controller[97889]: INFO:__main__:Writing out command to execute
Nov 24 21:45:49 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Nov 24 21:45:49 compute-0 ovn_controller[97889]: ++ cat /run_command
Nov 24 21:45:49 compute-0 ovn_controller[97889]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 24 21:45:49 compute-0 ovn_controller[97889]: + ARGS=
Nov 24 21:45:49 compute-0 ovn_controller[97889]: + sudo kolla_copy_cacerts
Nov 24 21:45:49 compute-0 systemd[1]: Started Session c2 of User root.
Nov 24 21:45:49 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Nov 24 21:45:49 compute-0 ovn_controller[97889]: + [[ ! -n '' ]]
Nov 24 21:45:49 compute-0 ovn_controller[97889]: + . kolla_extend_start
Nov 24 21:45:49 compute-0 ovn_controller[97889]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Nov 24 21:45:49 compute-0 ovn_controller[97889]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 24 21:45:49 compute-0 ovn_controller[97889]: + umask 0022
Nov 24 21:45:49 compute-0 ovn_controller[97889]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Nov 24 21:45:49 compute-0 ovn_controller[97889]: 2025-11-24T21:45:49Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 24 21:45:49 compute-0 ovn_controller[97889]: 2025-11-24T21:45:49Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 24 21:45:49 compute-0 ovn_controller[97889]: 2025-11-24T21:45:49Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Nov 24 21:45:49 compute-0 ovn_controller[97889]: 2025-11-24T21:45:49Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Nov 24 21:45:49 compute-0 ovn_controller[97889]: 2025-11-24T21:45:49Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 24 21:45:49 compute-0 ovn_controller[97889]: 2025-11-24T21:45:49Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Nov 24 21:45:49 compute-0 NetworkManager[56413]: <info>  [1764020749.4271] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Nov 24 21:45:49 compute-0 NetworkManager[56413]: <info>  [1764020749.4281] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 21:45:49 compute-0 NetworkManager[56413]: <info>  [1764020749.4296] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/15)
Nov 24 21:45:49 compute-0 NetworkManager[56413]: <info>  [1764020749.4304] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/16)
Nov 24 21:45:49 compute-0 NetworkManager[56413]: <info>  [1764020749.4310] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 24 21:45:49 compute-0 kernel: br-int: entered promiscuous mode
Nov 24 21:45:49 compute-0 ovn_controller[97889]: 2025-11-24T21:45:49Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 24 21:45:49 compute-0 ovn_controller[97889]: 2025-11-24T21:45:49Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 24 21:45:49 compute-0 ovn_controller[97889]: 2025-11-24T21:45:49Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 24 21:45:49 compute-0 ovn_controller[97889]: 2025-11-24T21:45:49Z|00010|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 24 21:45:49 compute-0 ovn_controller[97889]: 2025-11-24T21:45:49Z|00011|features|INFO|OVS Feature: ct_zero_snat, state: supported
Nov 24 21:45:49 compute-0 ovn_controller[97889]: 2025-11-24T21:45:49Z|00012|features|INFO|OVS Feature: ct_flush, state: supported
Nov 24 21:45:49 compute-0 ovn_controller[97889]: 2025-11-24T21:45:49Z|00013|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Nov 24 21:45:49 compute-0 ovn_controller[97889]: 2025-11-24T21:45:49Z|00014|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 24 21:45:49 compute-0 ovn_controller[97889]: 2025-11-24T21:45:49Z|00015|main|INFO|OVS feature set changed, force recompute.
Nov 24 21:45:49 compute-0 ovn_controller[97889]: 2025-11-24T21:45:49Z|00016|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 24 21:45:49 compute-0 ovn_controller[97889]: 2025-11-24T21:45:49Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 24 21:45:49 compute-0 ovn_controller[97889]: 2025-11-24T21:45:49Z|00018|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 24 21:45:49 compute-0 ovn_controller[97889]: 2025-11-24T21:45:49Z|00019|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Nov 24 21:45:49 compute-0 ovn_controller[97889]: 2025-11-24T21:45:49Z|00020|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Nov 24 21:45:49 compute-0 ovn_controller[97889]: 2025-11-24T21:45:49Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 24 21:45:49 compute-0 ovn_controller[97889]: 2025-11-24T21:45:49Z|00022|main|INFO|OVS feature set changed, force recompute.
Nov 24 21:45:49 compute-0 ovn_controller[97889]: 2025-11-24T21:45:49Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Nov 24 21:45:49 compute-0 ovn_controller[97889]: 2025-11-24T21:45:49Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Nov 24 21:45:49 compute-0 ovn_controller[97889]: 2025-11-24T21:45:49Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 24 21:45:49 compute-0 ovn_controller[97889]: 2025-11-24T21:45:49Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 24 21:45:49 compute-0 ovn_controller[97889]: 2025-11-24T21:45:49Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 24 21:45:49 compute-0 ovn_controller[97889]: 2025-11-24T21:45:49Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 24 21:45:49 compute-0 ovn_controller[97889]: 2025-11-24T21:45:49Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 24 21:45:49 compute-0 ovn_controller[97889]: 2025-11-24T21:45:49Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 24 21:45:49 compute-0 NetworkManager[56413]: <info>  [1764020749.4552] manager: (ovn-dcbc06-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Nov 24 21:45:49 compute-0 systemd-udevd[98046]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 21:45:49 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Nov 24 21:45:49 compute-0 NetworkManager[56413]: <info>  [1764020749.4822] device (genev_sys_6081): carrier: link connected
Nov 24 21:45:49 compute-0 NetworkManager[56413]: <info>  [1764020749.4825] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/18)
Nov 24 21:45:49 compute-0 systemd-udevd[98047]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 21:45:49 compute-0 sudo[98153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trowsmkpisgsfvqexpkhnwsywkaawcky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020749.4605985-609-109345190432519/AnsiballZ_command.py'
Nov 24 21:45:49 compute-0 sudo[98153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:50 compute-0 python3.9[98155]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:45:50 compute-0 ovs-vsctl[98156]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Nov 24 21:45:50 compute-0 sudo[98153]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:50 compute-0 sudo[98306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxhhgkrzpcxsgrhyhkgtnhcvhhqtbyhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020750.2941651-617-215751863957915/AnsiballZ_command.py'
Nov 24 21:45:50 compute-0 sudo[98306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:50 compute-0 python3.9[98308]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:45:50 compute-0 ovs-vsctl[98310]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Nov 24 21:45:50 compute-0 sudo[98306]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:51 compute-0 sudo[98461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izzcxevfdyaopipgpcesfkpurewyceno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020751.3939774-631-221790983724921/AnsiballZ_command.py'
Nov 24 21:45:51 compute-0 sudo[98461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:45:51 compute-0 python3.9[98463]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:45:51 compute-0 ovs-vsctl[98464]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Nov 24 21:45:51 compute-0 sudo[98461]: pam_unix(sudo:session): session closed for user root
Nov 24 21:45:52 compute-0 sshd-session[87389]: Connection closed by 192.168.122.30 port 54510
Nov 24 21:45:52 compute-0 sshd-session[87386]: pam_unix(sshd:session): session closed for user zuul
Nov 24 21:45:52 compute-0 systemd[1]: session-20.scope: Deactivated successfully.
Nov 24 21:45:52 compute-0 systemd[1]: session-20.scope: Consumed 53.223s CPU time.
Nov 24 21:45:52 compute-0 systemd-logind[806]: Session 20 logged out. Waiting for processes to exit.
Nov 24 21:45:52 compute-0 systemd-logind[806]: Removed session 20.
Nov 24 21:45:57 compute-0 sshd-session[98489]: Accepted publickey for zuul from 192.168.122.30 port 53696 ssh2: ECDSA SHA256:EZFVJPqmz26FBCQurIACD0v24tZqtt5C19oQoS60ZHY
Nov 24 21:45:57 compute-0 systemd-logind[806]: New session 22 of user zuul.
Nov 24 21:45:57 compute-0 systemd[1]: Started Session 22 of User zuul.
Nov 24 21:45:57 compute-0 sshd-session[98489]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 21:45:59 compute-0 python3.9[98642]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 21:45:59 compute-0 systemd[1]: Stopping User Manager for UID 0...
Nov 24 21:45:59 compute-0 systemd[97928]: Activating special unit Exit the Session...
Nov 24 21:45:59 compute-0 systemd[97928]: Stopped target Main User Target.
Nov 24 21:45:59 compute-0 systemd[97928]: Stopped target Basic System.
Nov 24 21:45:59 compute-0 systemd[97928]: Stopped target Paths.
Nov 24 21:45:59 compute-0 systemd[97928]: Stopped target Sockets.
Nov 24 21:45:59 compute-0 systemd[97928]: Stopped target Timers.
Nov 24 21:45:59 compute-0 systemd[97928]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 24 21:45:59 compute-0 systemd[97928]: Closed D-Bus User Message Bus Socket.
Nov 24 21:45:59 compute-0 systemd[97928]: Stopped Create User's Volatile Files and Directories.
Nov 24 21:45:59 compute-0 systemd[97928]: Removed slice User Application Slice.
Nov 24 21:45:59 compute-0 systemd[97928]: Reached target Shutdown.
Nov 24 21:45:59 compute-0 systemd[97928]: Finished Exit the Session.
Nov 24 21:45:59 compute-0 systemd[97928]: Reached target Exit the Session.
Nov 24 21:45:59 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Nov 24 21:45:59 compute-0 systemd[1]: Stopped User Manager for UID 0.
Nov 24 21:45:59 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 24 21:45:59 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 24 21:45:59 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 24 21:45:59 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 24 21:45:59 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Nov 24 21:46:00 compute-0 sudo[98799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghcrfkqqbxrqccbsvwyfaaufzxhxozyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020759.9573004-34-267222373337668/AnsiballZ_file.py'
Nov 24 21:46:00 compute-0 sudo[98799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:46:00 compute-0 python3.9[98801]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:46:00 compute-0 sudo[98799]: pam_unix(sudo:session): session closed for user root
Nov 24 21:46:01 compute-0 sudo[98951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epzlmorvbzoastdkggywfpmtdxczjvlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020761.0173178-34-113265995353163/AnsiballZ_file.py'
Nov 24 21:46:01 compute-0 sudo[98951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:46:01 compute-0 python3.9[98953]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:46:01 compute-0 sudo[98951]: pam_unix(sudo:session): session closed for user root
Nov 24 21:46:02 compute-0 sudo[99103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtoxyswrqbavqvyvjxqsrfzyxeglgmig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020761.7993252-34-23020007528300/AnsiballZ_file.py'
Nov 24 21:46:02 compute-0 sudo[99103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:46:02 compute-0 python3.9[99105]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:46:02 compute-0 sudo[99103]: pam_unix(sudo:session): session closed for user root
Nov 24 21:46:02 compute-0 sudo[99256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnlwrpiucejtocgxpturetnupsixryrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020762.5543926-34-274709081065931/AnsiballZ_file.py'
Nov 24 21:46:02 compute-0 sudo[99256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:46:03 compute-0 python3.9[99258]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:46:03 compute-0 sudo[99256]: pam_unix(sudo:session): session closed for user root
Nov 24 21:46:03 compute-0 sudo[99408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecsvjthfcwmqsuswirenwntrbepdygxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020763.330226-34-6514482943438/AnsiballZ_file.py'
Nov 24 21:46:03 compute-0 sudo[99408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:46:03 compute-0 python3.9[99410]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:46:03 compute-0 sudo[99408]: pam_unix(sudo:session): session closed for user root
Nov 24 21:46:04 compute-0 python3.9[99560]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 21:46:05 compute-0 sudo[99710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyhpsjkwzfpsbxfonklzbtdcugjjlqpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020765.071193-78-206365396689763/AnsiballZ_seboolean.py'
Nov 24 21:46:05 compute-0 sudo[99710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:46:05 compute-0 python3.9[99712]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 24 21:46:06 compute-0 sudo[99710]: pam_unix(sudo:session): session closed for user root
Nov 24 21:46:06 compute-0 sshd-session[99760]: Invalid user ubuntu from 45.148.10.240 port 38704
Nov 24 21:46:07 compute-0 sshd-session[99760]: Connection closed by invalid user ubuntu 45.148.10.240 port 38704 [preauth]
Nov 24 21:46:07 compute-0 python3.9[99864]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:46:08 compute-0 python3.9[99986]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764020766.5774539-86-25174006704913/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:46:09 compute-0 python3.9[100136]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:46:09 compute-0 python3.9[100257]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764020768.2747266-101-239233593680982/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:46:10 compute-0 sudo[100407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpljxljgllboqxmcczhckaclfxeakjaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020770.1284077-118-169706145197624/AnsiballZ_setup.py'
Nov 24 21:46:10 compute-0 sudo[100407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:46:10 compute-0 python3.9[100409]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 21:46:11 compute-0 sudo[100407]: pam_unix(sudo:session): session closed for user root
Nov 24 21:46:11 compute-0 sudo[100491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emxgapirnyxqgwrynscecwrkfabmfica ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020770.1284077-118-169706145197624/AnsiballZ_dnf.py'
Nov 24 21:46:11 compute-0 sudo[100491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:46:11 compute-0 python3.9[100493]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 21:46:13 compute-0 sudo[100491]: pam_unix(sudo:session): session closed for user root
Nov 24 21:46:14 compute-0 sudo[100644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwxtuuxnjbqwoumjbhnvzionfwesdckg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020773.4286988-130-226264984871694/AnsiballZ_systemd.py'
Nov 24 21:46:14 compute-0 sudo[100644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:46:14 compute-0 python3.9[100646]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 21:46:14 compute-0 sudo[100644]: pam_unix(sudo:session): session closed for user root
Nov 24 21:46:15 compute-0 python3.9[100799]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:46:15 compute-0 python3.9[100920]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764020774.7535741-138-137185668657575/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:46:16 compute-0 python3.9[101070]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:46:17 compute-0 python3.9[101191]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764020776.1204998-138-57101334855076/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:46:18 compute-0 python3.9[101341]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:46:19 compute-0 ovn_controller[97889]: 2025-11-24T21:46:19Z|00025|memory|INFO|16128 kB peak resident set size after 30.0 seconds
Nov 24 21:46:19 compute-0 ovn_controller[97889]: 2025-11-24T21:46:19Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:471 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Nov 24 21:46:19 compute-0 podman[101436]: 2025-11-24 21:46:19.392306887 +0000 UTC m=+0.106308974 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 24 21:46:19 compute-0 python3.9[101475]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764020778.3201861-182-123612124527077/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:46:20 compute-0 python3.9[101638]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:46:20 compute-0 python3.9[101759]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764020779.642314-182-255726832116791/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:46:21 compute-0 python3.9[101909]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:46:22 compute-0 sudo[102061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seyqfrnrnufrhpkwuuapuriwtaaxehrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020781.9190242-220-145341698679373/AnsiballZ_file.py'
Nov 24 21:46:22 compute-0 sudo[102061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:46:22 compute-0 python3.9[102063]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:46:22 compute-0 sudo[102061]: pam_unix(sudo:session): session closed for user root
Nov 24 21:46:23 compute-0 sudo[102213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iisswzsdwzteebvfrvxhysycsqjxxqkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020782.749966-228-132754321257521/AnsiballZ_stat.py'
Nov 24 21:46:23 compute-0 sudo[102213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:46:23 compute-0 python3.9[102215]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:46:23 compute-0 sudo[102213]: pam_unix(sudo:session): session closed for user root
Nov 24 21:46:23 compute-0 sudo[102291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owiyagbedylwmcpdikqpvptnvpmezdje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020782.749966-228-132754321257521/AnsiballZ_file.py'
Nov 24 21:46:23 compute-0 sudo[102291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:46:23 compute-0 python3.9[102293]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:46:23 compute-0 sudo[102291]: pam_unix(sudo:session): session closed for user root
Nov 24 21:46:24 compute-0 sudo[102443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzfgadqtrjllmpegtpovwrnaogfkxvlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020784.0636404-228-79726716872419/AnsiballZ_stat.py'
Nov 24 21:46:24 compute-0 sudo[102443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:46:24 compute-0 python3.9[102445]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:46:24 compute-0 sudo[102443]: pam_unix(sudo:session): session closed for user root
Nov 24 21:46:24 compute-0 sudo[102521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptiinswqmouhrpgvxfybiycorcodyaek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020784.0636404-228-79726716872419/AnsiballZ_file.py'
Nov 24 21:46:24 compute-0 sudo[102521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:46:25 compute-0 python3.9[102523]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:46:25 compute-0 sudo[102521]: pam_unix(sudo:session): session closed for user root
Nov 24 21:46:25 compute-0 sudo[102673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnriakoydoitsmongekzoeuaigejvggj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020785.4577785-251-278199709292176/AnsiballZ_file.py'
Nov 24 21:46:25 compute-0 sudo[102673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:46:26 compute-0 python3.9[102675]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:46:26 compute-0 sudo[102673]: pam_unix(sudo:session): session closed for user root
Nov 24 21:46:26 compute-0 sudo[102825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygfbmnjzzirhicqbbuwthtktrmifcfgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020786.3214705-259-127136538566931/AnsiballZ_stat.py'
Nov 24 21:46:26 compute-0 sudo[102825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:46:26 compute-0 python3.9[102827]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:46:26 compute-0 sudo[102825]: pam_unix(sudo:session): session closed for user root
Nov 24 21:46:27 compute-0 sudo[102903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jocorrckhwgwpmgwrnwxanzioevgxkei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020786.3214705-259-127136538566931/AnsiballZ_file.py'
Nov 24 21:46:27 compute-0 sudo[102903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:46:27 compute-0 python3.9[102905]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:46:27 compute-0 sudo[102903]: pam_unix(sudo:session): session closed for user root
Nov 24 21:46:28 compute-0 sudo[103055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkyckbuvzhgatbbumlcrtrrhzsoffkui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020787.6640167-271-241385156174188/AnsiballZ_stat.py'
Nov 24 21:46:28 compute-0 sudo[103055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:46:28 compute-0 python3.9[103057]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:46:28 compute-0 sudo[103055]: pam_unix(sudo:session): session closed for user root
Nov 24 21:46:28 compute-0 sudo[103133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hefcymgxogcvoyotxjjulflvktluchve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020787.6640167-271-241385156174188/AnsiballZ_file.py'
Nov 24 21:46:28 compute-0 sudo[103133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:46:28 compute-0 python3.9[103135]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:46:28 compute-0 sudo[103133]: pam_unix(sudo:session): session closed for user root
Nov 24 21:46:29 compute-0 sudo[103285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuyltvkungmnnpqteashgcdaiszyoswu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020789.0386777-283-92758191325775/AnsiballZ_systemd.py'
Nov 24 21:46:29 compute-0 sudo[103285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:46:29 compute-0 python3.9[103287]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 21:46:29 compute-0 systemd[1]: Reloading.
Nov 24 21:46:29 compute-0 systemd-rc-local-generator[103314]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:46:29 compute-0 systemd-sysv-generator[103318]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:46:30 compute-0 sudo[103285]: pam_unix(sudo:session): session closed for user root
Nov 24 21:46:30 compute-0 sudo[103474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atklfhuisahivarttvmtxaokgyzgemnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020790.4222033-291-106198300518735/AnsiballZ_stat.py'
Nov 24 21:46:30 compute-0 sudo[103474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:46:30 compute-0 python3.9[103476]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:46:31 compute-0 sudo[103474]: pam_unix(sudo:session): session closed for user root
Nov 24 21:46:31 compute-0 sudo[103552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfszyuhgabjcpfkrgrdivioiwbhvawhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020790.4222033-291-106198300518735/AnsiballZ_file.py'
Nov 24 21:46:31 compute-0 sudo[103552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:46:31 compute-0 python3.9[103554]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:46:31 compute-0 sudo[103552]: pam_unix(sudo:session): session closed for user root
Nov 24 21:46:32 compute-0 sudo[103704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgaprykakqytbgpqpaguoeprkmszklyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020791.748598-303-246526531393291/AnsiballZ_stat.py'
Nov 24 21:46:32 compute-0 sudo[103704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:46:32 compute-0 python3.9[103706]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:46:32 compute-0 sudo[103704]: pam_unix(sudo:session): session closed for user root
Nov 24 21:46:32 compute-0 sudo[103782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlsdvmnojvaorkywnnmbtuwcdbvsvxxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020791.748598-303-246526531393291/AnsiballZ_file.py'
Nov 24 21:46:32 compute-0 sudo[103782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:46:32 compute-0 python3.9[103784]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:46:32 compute-0 sudo[103782]: pam_unix(sudo:session): session closed for user root
Nov 24 21:46:33 compute-0 sudo[103934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzijrpavstwxfjbdsrssbdpwmoznktnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020793.1428006-315-156062970178435/AnsiballZ_systemd.py'
Nov 24 21:46:33 compute-0 sudo[103934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:46:33 compute-0 python3.9[103936]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 21:46:33 compute-0 systemd[1]: Reloading.
Nov 24 21:46:33 compute-0 systemd-rc-local-generator[103958]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:46:33 compute-0 systemd-sysv-generator[103962]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:46:34 compute-0 systemd[1]: Starting Create netns directory...
Nov 24 21:46:34 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 24 21:46:34 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 24 21:46:34 compute-0 systemd[1]: Finished Create netns directory.
Nov 24 21:46:34 compute-0 sudo[103934]: pam_unix(sudo:session): session closed for user root
Nov 24 21:46:34 compute-0 sudo[104127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfskrvavnswtouxvauaawawoerpbufvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020794.566323-325-83456545798703/AnsiballZ_file.py'
Nov 24 21:46:34 compute-0 sudo[104127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:46:35 compute-0 python3.9[104129]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:46:35 compute-0 sudo[104127]: pam_unix(sudo:session): session closed for user root
Nov 24 21:46:35 compute-0 sudo[104279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etsekcmzajdmjjaejpkwwhfhbmyzjrjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020795.385207-333-106501617716035/AnsiballZ_stat.py'
Nov 24 21:46:35 compute-0 sudo[104279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:46:35 compute-0 python3.9[104281]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:46:36 compute-0 sudo[104279]: pam_unix(sudo:session): session closed for user root
Nov 24 21:46:36 compute-0 sudo[104402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igxmqujfaebspmnuqioddexfzfnexifk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020795.385207-333-106501617716035/AnsiballZ_copy.py'
Nov 24 21:46:36 compute-0 sudo[104402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:46:36 compute-0 python3.9[104404]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764020795.385207-333-106501617716035/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:46:36 compute-0 sudo[104402]: pam_unix(sudo:session): session closed for user root
Nov 24 21:46:37 compute-0 sudo[104554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thrueysfwcyujbmjcmnckgkqewknykoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020797.1400368-350-240128562941327/AnsiballZ_file.py'
Nov 24 21:46:37 compute-0 sudo[104554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:46:37 compute-0 python3.9[104556]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:46:37 compute-0 sudo[104554]: pam_unix(sudo:session): session closed for user root
Nov 24 21:46:38 compute-0 sudo[104706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbwrutmhslphkqujshtaezhvkqaksukc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020798.0291078-358-44857106775503/AnsiballZ_stat.py'
Nov 24 21:46:38 compute-0 sudo[104706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:46:38 compute-0 python3.9[104708]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:46:38 compute-0 sudo[104706]: pam_unix(sudo:session): session closed for user root
Nov 24 21:46:39 compute-0 sudo[104829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnbethnaucoqnbdshscrxacnxnmenili ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020798.0291078-358-44857106775503/AnsiballZ_copy.py'
Nov 24 21:46:39 compute-0 sudo[104829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:46:39 compute-0 python3.9[104831]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764020798.0291078-358-44857106775503/.source.json _original_basename=.nsfg7pqb follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:46:39 compute-0 sudo[104829]: pam_unix(sudo:session): session closed for user root
Nov 24 21:46:40 compute-0 sudo[104981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atixelikpskviqqdagzyethgbxmxblqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020799.6445284-373-194862888558119/AnsiballZ_file.py'
Nov 24 21:46:40 compute-0 sudo[104981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:46:40 compute-0 python3.9[104983]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:46:40 compute-0 sudo[104981]: pam_unix(sudo:session): session closed for user root
Nov 24 21:46:40 compute-0 sudo[105133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnywizyvftvylmxwmcrivktdbvfnosnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020800.6041226-381-66906960273270/AnsiballZ_stat.py'
Nov 24 21:46:40 compute-0 sudo[105133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:46:41 compute-0 sudo[105133]: pam_unix(sudo:session): session closed for user root
Nov 24 21:46:41 compute-0 sudo[105256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlnuqzphhuwvvblihewxjgvuofkuctcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020800.6041226-381-66906960273270/AnsiballZ_copy.py'
Nov 24 21:46:41 compute-0 sudo[105256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:46:41 compute-0 sudo[105256]: pam_unix(sudo:session): session closed for user root
Nov 24 21:46:42 compute-0 sudo[105408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-waromhjoxqnqtfnwynfyosspisbxhaia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020802.3376975-398-37489078069367/AnsiballZ_container_config_data.py'
Nov 24 21:46:42 compute-0 sudo[105408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:46:43 compute-0 python3.9[105410]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Nov 24 21:46:43 compute-0 sudo[105408]: pam_unix(sudo:session): session closed for user root
Nov 24 21:46:43 compute-0 sudo[105560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmpxdcsoqngmagbfofvfxymeggwcjsoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020803.3438118-407-86324198356967/AnsiballZ_container_config_hash.py'
Nov 24 21:46:43 compute-0 sudo[105560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:46:44 compute-0 python3.9[105562]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 24 21:46:44 compute-0 sudo[105560]: pam_unix(sudo:session): session closed for user root
Nov 24 21:46:44 compute-0 sudo[105712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmncqcluypjqzhtkmblysxaicszgdqzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020804.338919-416-221650339421213/AnsiballZ_podman_container_info.py'
Nov 24 21:46:44 compute-0 sudo[105712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:46:45 compute-0 python3.9[105714]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 24 21:46:45 compute-0 sudo[105712]: pam_unix(sudo:session): session closed for user root
Nov 24 21:46:46 compute-0 sudo[105890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptiqssdfvcrbfmhzkwfodncripvgkabb ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764020805.7774122-429-133502669809371/AnsiballZ_edpm_container_manage.py'
Nov 24 21:46:46 compute-0 sudo[105890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:46:46 compute-0 python3[105892]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 24 21:46:46 compute-0 podman[105927]: 2025-11-24 21:46:46.955609497 +0000 UTC m=+0.091034969 container create fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 24 21:46:46 compute-0 podman[105927]: 2025-11-24 21:46:46.911033301 +0000 UTC m=+0.046458813 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 21:46:46 compute-0 python3[105892]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 21:46:47 compute-0 sudo[105890]: pam_unix(sudo:session): session closed for user root
Nov 24 21:46:47 compute-0 sudo[106115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thpxafyjlubhcoszuhvpzdjxdslkggup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020807.4300652-437-280547675202529/AnsiballZ_stat.py'
Nov 24 21:46:47 compute-0 sudo[106115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:46:48 compute-0 python3.9[106117]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:46:48 compute-0 sudo[106115]: pam_unix(sudo:session): session closed for user root
Nov 24 21:46:48 compute-0 sudo[106269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ishkutlmrzjbkrudjyulakfmhkozcvuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020808.3779094-446-114344801205097/AnsiballZ_file.py'
Nov 24 21:46:48 compute-0 sudo[106269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:46:49 compute-0 python3.9[106271]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:46:49 compute-0 sudo[106269]: pam_unix(sudo:session): session closed for user root
Nov 24 21:46:49 compute-0 sudo[106345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygytxcsnywcyewtklddbpngkcqnfshct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020808.3779094-446-114344801205097/AnsiballZ_stat.py'
Nov 24 21:46:49 compute-0 sudo[106345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:46:49 compute-0 python3.9[106347]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:46:49 compute-0 sudo[106345]: pam_unix(sudo:session): session closed for user root
Nov 24 21:46:49 compute-0 podman[106348]: 2025-11-24 21:46:49.566813358 +0000 UTC m=+0.119886068 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:46:50 compute-0 sudo[106523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkaofkpcjxqrbitbecubavelgicarwpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020809.633231-446-133314752126637/AnsiballZ_copy.py'
Nov 24 21:46:50 compute-0 sudo[106523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:46:50 compute-0 python3.9[106525]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764020809.633231-446-133314752126637/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:46:50 compute-0 sudo[106523]: pam_unix(sudo:session): session closed for user root
Nov 24 21:46:50 compute-0 sudo[106599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxahmrmeaqxienvmloxbcasxlljjdoot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020809.633231-446-133314752126637/AnsiballZ_systemd.py'
Nov 24 21:46:50 compute-0 sudo[106599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:46:51 compute-0 python3.9[106601]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 21:46:51 compute-0 systemd[1]: Reloading.
Nov 24 21:46:51 compute-0 systemd-rc-local-generator[106630]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:46:51 compute-0 systemd-sysv-generator[106634]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:46:51 compute-0 sudo[106599]: pam_unix(sudo:session): session closed for user root
Nov 24 21:46:51 compute-0 sudo[106712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naqowobiddmsrxwybivyxnfmkhdmkhmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020809.633231-446-133314752126637/AnsiballZ_systemd.py'
Nov 24 21:46:51 compute-0 sudo[106712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:46:52 compute-0 python3.9[106714]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 21:46:52 compute-0 systemd[1]: Reloading.
Nov 24 21:46:52 compute-0 systemd-rc-local-generator[106743]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:46:52 compute-0 systemd-sysv-generator[106748]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:46:52 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Nov 24 21:46:52 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:46:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afea70d95427b98201a70dbdf8a96e6b0c8a482a1bff7dcb667f13447a58ee2e/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 24 21:46:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afea70d95427b98201a70dbdf8a96e6b0c8a482a1bff7dcb667f13447a58ee2e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 24 21:46:52 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6.
Nov 24 21:46:52 compute-0 podman[106755]: 2025-11-24 21:46:52.648553287 +0000 UTC m=+0.165640300 container init fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 21:46:52 compute-0 ovn_metadata_agent[106771]: + sudo -E kolla_set_configs
Nov 24 21:46:52 compute-0 podman[106755]: 2025-11-24 21:46:52.69166478 +0000 UTC m=+0.208751743 container start fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 24 21:46:52 compute-0 edpm-start-podman-container[106755]: ovn_metadata_agent
Nov 24 21:46:52 compute-0 ovn_metadata_agent[106771]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 24 21:46:52 compute-0 ovn_metadata_agent[106771]: INFO:__main__:Validating config file
Nov 24 21:46:52 compute-0 ovn_metadata_agent[106771]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 24 21:46:52 compute-0 ovn_metadata_agent[106771]: INFO:__main__:Copying service configuration files
Nov 24 21:46:52 compute-0 ovn_metadata_agent[106771]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 24 21:46:52 compute-0 ovn_metadata_agent[106771]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 24 21:46:52 compute-0 ovn_metadata_agent[106771]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 24 21:46:52 compute-0 ovn_metadata_agent[106771]: INFO:__main__:Writing out command to execute
Nov 24 21:46:52 compute-0 ovn_metadata_agent[106771]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 24 21:46:52 compute-0 ovn_metadata_agent[106771]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 24 21:46:52 compute-0 ovn_metadata_agent[106771]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 24 21:46:52 compute-0 ovn_metadata_agent[106771]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 24 21:46:52 compute-0 ovn_metadata_agent[106771]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 24 21:46:52 compute-0 ovn_metadata_agent[106771]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 24 21:46:52 compute-0 ovn_metadata_agent[106771]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Nov 24 21:46:52 compute-0 ovn_metadata_agent[106771]: ++ cat /run_command
Nov 24 21:46:52 compute-0 ovn_metadata_agent[106771]: + CMD=neutron-ovn-metadata-agent
Nov 24 21:46:52 compute-0 ovn_metadata_agent[106771]: + ARGS=
Nov 24 21:46:52 compute-0 edpm-start-podman-container[106754]: Creating additional drop-in dependency for "ovn_metadata_agent" (fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6)
Nov 24 21:46:52 compute-0 ovn_metadata_agent[106771]: + sudo kolla_copy_cacerts
Nov 24 21:46:52 compute-0 podman[106778]: 2025-11-24 21:46:52.801784535 +0000 UTC m=+0.091264756 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 21:46:52 compute-0 ovn_metadata_agent[106771]: + [[ ! -n '' ]]
Nov 24 21:46:52 compute-0 ovn_metadata_agent[106771]: + . kolla_extend_start
Nov 24 21:46:52 compute-0 ovn_metadata_agent[106771]: Running command: 'neutron-ovn-metadata-agent'
Nov 24 21:46:52 compute-0 ovn_metadata_agent[106771]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Nov 24 21:46:52 compute-0 ovn_metadata_agent[106771]: + umask 0022
Nov 24 21:46:52 compute-0 ovn_metadata_agent[106771]: + exec neutron-ovn-metadata-agent
Nov 24 21:46:52 compute-0 systemd[1]: Reloading.
Nov 24 21:46:52 compute-0 systemd-rc-local-generator[106851]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:46:52 compute-0 systemd-sysv-generator[106855]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:46:53 compute-0 systemd[1]: Started ovn_metadata_agent container.
Nov 24 21:46:53 compute-0 sudo[106712]: pam_unix(sudo:session): session closed for user root
Nov 24 21:46:53 compute-0 sshd-session[98492]: Connection closed by 192.168.122.30 port 53696
Nov 24 21:46:53 compute-0 sshd-session[98489]: pam_unix(sshd:session): session closed for user zuul
Nov 24 21:46:53 compute-0 systemd[1]: session-22.scope: Deactivated successfully.
Nov 24 21:46:53 compute-0 systemd[1]: session-22.scope: Consumed 41.264s CPU time.
Nov 24 21:46:53 compute-0 systemd-logind[806]: Session 22 logged out. Waiting for processes to exit.
Nov 24 21:46:53 compute-0 systemd-logind[806]: Removed session 22.
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.485 106776 INFO neutron.common.config [-] Logging enabled!
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.486 106776 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.486 106776 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.487 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.487 106776 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.487 106776 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.487 106776 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.487 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.488 106776 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.488 106776 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.488 106776 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.488 106776 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.488 106776 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.488 106776 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.489 106776 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.489 106776 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.489 106776 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.489 106776 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.489 106776 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.489 106776 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.490 106776 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.490 106776 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.490 106776 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.490 106776 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.490 106776 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.490 106776 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.490 106776 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.491 106776 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.491 106776 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.491 106776 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.491 106776 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.491 106776 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.491 106776 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.492 106776 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.492 106776 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.492 106776 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.492 106776 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.492 106776 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.493 106776 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.493 106776 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.493 106776 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.493 106776 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.493 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.493 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.493 106776 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.494 106776 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.494 106776 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.494 106776 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.494 106776 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.494 106776 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.494 106776 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.494 106776 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.495 106776 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.495 106776 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.495 106776 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.495 106776 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.495 106776 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.495 106776 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.495 106776 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.496 106776 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.496 106776 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.496 106776 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.496 106776 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.496 106776 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.496 106776 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.497 106776 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.497 106776 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.497 106776 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.497 106776 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.497 106776 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.497 106776 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.498 106776 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.498 106776 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.498 106776 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.498 106776 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.498 106776 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.498 106776 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.498 106776 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.499 106776 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.499 106776 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.499 106776 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.499 106776 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.499 106776 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.499 106776 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.499 106776 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.500 106776 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.500 106776 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.500 106776 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.500 106776 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.500 106776 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.500 106776 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.500 106776 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.501 106776 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.501 106776 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.501 106776 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.501 106776 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.501 106776 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.501 106776 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.501 106776 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.502 106776 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.502 106776 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.502 106776 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.502 106776 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.502 106776 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.502 106776 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.502 106776 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.503 106776 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.503 106776 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.503 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.503 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.503 106776 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.503 106776 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.504 106776 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.504 106776 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.504 106776 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.504 106776 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.504 106776 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.504 106776 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.505 106776 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.505 106776 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.505 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.505 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.505 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.505 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.506 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.506 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.506 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.506 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.506 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.506 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.506 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.507 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.507 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.507 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.507 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.507 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.508 106776 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.508 106776 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.508 106776 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.508 106776 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.508 106776 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.509 106776 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.509 106776 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.509 106776 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.509 106776 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.509 106776 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.510 106776 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.510 106776 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.510 106776 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.510 106776 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.510 106776 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.510 106776 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.510 106776 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.511 106776 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.511 106776 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.511 106776 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.511 106776 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.511 106776 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.511 106776 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.511 106776 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.512 106776 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.512 106776 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.512 106776 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.512 106776 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.512 106776 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.512 106776 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.512 106776 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.513 106776 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.513 106776 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.513 106776 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.513 106776 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.513 106776 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.513 106776 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.514 106776 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.514 106776 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.514 106776 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.514 106776 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.514 106776 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.514 106776 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.514 106776 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.515 106776 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.515 106776 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.515 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.515 106776 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.515 106776 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.515 106776 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.516 106776 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.516 106776 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.516 106776 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.516 106776 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.516 106776 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.516 106776 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.516 106776 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.517 106776 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.517 106776 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.517 106776 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.517 106776 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.517 106776 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.517 106776 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.517 106776 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.518 106776 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.518 106776 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.518 106776 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.518 106776 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.518 106776 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.518 106776 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.518 106776 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.519 106776 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.519 106776 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.519 106776 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.519 106776 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.519 106776 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.519 106776 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.520 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.520 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.520 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.520 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.520 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.520 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.520 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.521 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.521 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.521 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.521 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.521 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.521 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.522 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.522 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.522 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.522 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.522 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.522 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.522 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.523 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.523 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.523 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.523 106776 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.523 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.523 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.524 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.524 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.524 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.524 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.524 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.524 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.525 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.525 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.525 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.525 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.525 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.525 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.525 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.526 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.526 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.526 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.526 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.526 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.526 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.527 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.527 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.527 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.527 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.527 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.527 106776 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.528 106776 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.528 106776 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.528 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.528 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.528 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.528 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.528 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.529 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.529 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.529 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.529 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.529 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.529 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.530 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.530 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.530 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.530 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.530 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.530 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.530 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.531 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.531 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.531 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.531 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.531 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.531 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.531 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.532 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.532 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.532 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.532 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.532 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.532 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.533 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.533 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.533 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.533 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.533 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.533 106776 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.533 106776 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.542 106776 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.543 106776 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.543 106776 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.543 106776 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.543 106776 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.556 106776 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name d2f80616-70e9-484c-836d-1edab81fe5d9 (UUID: d2f80616-70e9-484c-836d-1edab81fe5d9) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.584 106776 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.584 106776 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.584 106776 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.584 106776 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.587 106776 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.594 106776 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.601 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'd2f80616-70e9-484c-836d-1edab81fe5d9'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], external_ids={}, name=d2f80616-70e9-484c-836d-1edab81fe5d9, nb_cfg_timestamp=1764020757454, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.602 106776 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fd6e3883160>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.603 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.603 106776 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.604 106776 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.604 106776 INFO oslo_service.service [-] Starting 1 workers
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.609 106776 DEBUG oslo_service.service [-] Started child 106886 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.613 106776 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp71rpkill/privsep.sock']
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.615 106886 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-163646'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.648 106886 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.649 106886 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.649 106886 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.654 106886 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.662 106886 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 24 21:46:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:54.671 106886 INFO eventlet.wsgi.server [-] (106886) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Nov 24 21:46:55 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Nov 24 21:46:55 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:55.253 106776 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 24 21:46:55 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:55.255 106776 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp71rpkill/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 24 21:46:55 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:55.141 106891 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 24 21:46:55 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:55.148 106891 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 24 21:46:55 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:55.152 106891 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Nov 24 21:46:55 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:55.153 106891 INFO oslo.privsep.daemon [-] privsep daemon running as pid 106891
Nov 24 21:46:55 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:55.259 106891 DEBUG oslo.privsep.daemon [-] privsep: reply[0e330a40-0a2a-4603-88f3-9300318994b8]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 21:46:55 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:55.773 106891 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 21:46:55 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:55.773 106891 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 21:46:55 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:55.773 106891 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.281 106891 DEBUG oslo.privsep.daemon [-] privsep: reply[4d5eb94d-27a6-42c9-b18a-2d29b31c4115]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.283 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=d2f80616-70e9-484c-836d-1edab81fe5d9, column=external_ids, values=({'neutron:ovn-metadata-id': '6985740c-377c-5ea6-8047-c02a5f96a760'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.297 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d2f80616-70e9-484c-836d-1edab81fe5d9, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.305 106776 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.305 106776 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.305 106776 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.305 106776 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.305 106776 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.305 106776 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.306 106776 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.306 106776 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.306 106776 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.306 106776 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.306 106776 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.306 106776 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.306 106776 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.307 106776 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.307 106776 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.307 106776 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.307 106776 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.307 106776 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.308 106776 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.308 106776 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.308 106776 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.308 106776 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.308 106776 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.308 106776 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.308 106776 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.309 106776 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.309 106776 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.309 106776 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.309 106776 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.309 106776 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.309 106776 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.310 106776 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.310 106776 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.310 106776 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.310 106776 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.310 106776 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.310 106776 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.311 106776 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.311 106776 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.311 106776 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.311 106776 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.311 106776 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.311 106776 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.312 106776 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.312 106776 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.312 106776 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.312 106776 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.312 106776 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.312 106776 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.312 106776 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.313 106776 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.313 106776 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.313 106776 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.313 106776 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.313 106776 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.313 106776 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.313 106776 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.314 106776 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.314 106776 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.314 106776 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.314 106776 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.314 106776 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.314 106776 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.314 106776 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.315 106776 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.315 106776 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.315 106776 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.315 106776 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.315 106776 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.315 106776 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.315 106776 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.316 106776 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.316 106776 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.316 106776 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.316 106776 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.316 106776 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.316 106776 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.317 106776 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.317 106776 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.317 106776 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.317 106776 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.317 106776 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.318 106776 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.318 106776 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.318 106776 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.318 106776 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.318 106776 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.318 106776 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.319 106776 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.319 106776 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.319 106776 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.319 106776 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.319 106776 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.319 106776 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.320 106776 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.320 106776 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.320 106776 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.320 106776 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.320 106776 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.321 106776 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.321 106776 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.321 106776 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.321 106776 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.321 106776 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.321 106776 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.321 106776 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.322 106776 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.322 106776 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.322 106776 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.322 106776 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.322 106776 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.322 106776 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.323 106776 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.323 106776 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.323 106776 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.323 106776 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.323 106776 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.323 106776 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.324 106776 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.324 106776 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.324 106776 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.324 106776 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.324 106776 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.324 106776 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.324 106776 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.325 106776 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.325 106776 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.325 106776 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.325 106776 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.325 106776 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.325 106776 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.325 106776 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.326 106776 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.326 106776 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.326 106776 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.326 106776 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.326 106776 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.326 106776 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.327 106776 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.327 106776 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.327 106776 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.327 106776 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.327 106776 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.327 106776 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.327 106776 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.328 106776 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.328 106776 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.328 106776 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.328 106776 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.328 106776 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.328 106776 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.329 106776 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.329 106776 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.329 106776 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.329 106776 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.329 106776 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.329 106776 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.329 106776 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.330 106776 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.330 106776 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.330 106776 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.330 106776 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.330 106776 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.330 106776 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.330 106776 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.330 106776 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.331 106776 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.331 106776 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.331 106776 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.331 106776 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.331 106776 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.331 106776 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.332 106776 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.332 106776 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.332 106776 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.332 106776 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.332 106776 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.332 106776 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.332 106776 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.333 106776 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.333 106776 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.333 106776 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.333 106776 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.333 106776 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.333 106776 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.333 106776 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.334 106776 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.334 106776 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.334 106776 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.334 106776 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.334 106776 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.334 106776 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.335 106776 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.335 106776 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.335 106776 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.335 106776 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.335 106776 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.335 106776 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.335 106776 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.336 106776 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.336 106776 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.336 106776 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.336 106776 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.336 106776 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.336 106776 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.336 106776 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.337 106776 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.337 106776 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.337 106776 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.337 106776 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.337 106776 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.337 106776 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.337 106776 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.338 106776 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.338 106776 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.338 106776 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.338 106776 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.338 106776 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.338 106776 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.338 106776 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.339 106776 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.339 106776 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.339 106776 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.339 106776 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.339 106776 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.339 106776 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.339 106776 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.340 106776 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.340 106776 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.340 106776 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.340 106776 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.340 106776 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.340 106776 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.340 106776 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.341 106776 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.341 106776 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.341 106776 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.341 106776 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.341 106776 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.341 106776 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.341 106776 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.342 106776 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.342 106776 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.342 106776 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.342 106776 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.342 106776 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.342 106776 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.342 106776 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.343 106776 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.343 106776 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.343 106776 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.343 106776 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.343 106776 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.343 106776 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.344 106776 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.344 106776 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.344 106776 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.344 106776 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.344 106776 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.344 106776 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.344 106776 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.345 106776 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.345 106776 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.345 106776 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.345 106776 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.345 106776 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.345 106776 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.345 106776 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.346 106776 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.346 106776 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.346 106776 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.346 106776 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.346 106776 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.346 106776 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.347 106776 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.347 106776 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.347 106776 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.347 106776 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.347 106776 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.347 106776 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.347 106776 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.348 106776 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.348 106776 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.348 106776 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.348 106776 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.348 106776 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.348 106776 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.348 106776 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.349 106776 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.349 106776 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.349 106776 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.349 106776 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.349 106776 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.349 106776 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.350 106776 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.350 106776 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.350 106776 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.350 106776 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.350 106776 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.350 106776 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.351 106776 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.351 106776 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.351 106776 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:46:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:46:56.351 106776 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 24 21:46:59 compute-0 sshd-session[106897]: Accepted publickey for zuul from 192.168.122.30 port 48198 ssh2: ECDSA SHA256:EZFVJPqmz26FBCQurIACD0v24tZqtt5C19oQoS60ZHY
Nov 24 21:46:59 compute-0 systemd-logind[806]: New session 23 of user zuul.
Nov 24 21:46:59 compute-0 systemd[1]: Started Session 23 of User zuul.
Nov 24 21:46:59 compute-0 sshd-session[106897]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 21:47:01 compute-0 python3.9[107050]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 21:47:01 compute-0 sshd-session[107051]: Invalid user sol from 193.32.162.145 port 36262
Nov 24 21:47:01 compute-0 sshd-session[107051]: Connection closed by invalid user sol 193.32.162.145 port 36262 [preauth]
Nov 24 21:47:02 compute-0 sudo[107206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glzkjqpoclkmksndbkgvfwuffuuoznfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020821.6794524-34-45898594087015/AnsiballZ_command.py'
Nov 24 21:47:02 compute-0 sudo[107206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:47:02 compute-0 python3.9[107208]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:47:02 compute-0 sudo[107206]: pam_unix(sudo:session): session closed for user root
Nov 24 21:47:03 compute-0 sudo[107371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-relvrdeorlvxasmwdxiphhlruwydcupc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020822.9377122-45-184249361662593/AnsiballZ_systemd_service.py'
Nov 24 21:47:03 compute-0 sudo[107371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:47:03 compute-0 python3.9[107373]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 21:47:03 compute-0 systemd[1]: Reloading.
Nov 24 21:47:04 compute-0 systemd-sysv-generator[107398]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:47:04 compute-0 systemd-rc-local-generator[107394]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:47:04 compute-0 sudo[107371]: pam_unix(sudo:session): session closed for user root
Nov 24 21:47:05 compute-0 python3.9[107558]: ansible-ansible.builtin.service_facts Invoked
Nov 24 21:47:05 compute-0 network[107575]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 21:47:05 compute-0 network[107576]: 'network-scripts' will be removed from distribution in near future.
Nov 24 21:47:05 compute-0 network[107577]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 21:47:11 compute-0 sudo[107836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnvjdbtuommtqqcchiaapygldoreioqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020831.1656291-64-203990795356673/AnsiballZ_systemd_service.py'
Nov 24 21:47:11 compute-0 sudo[107836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:47:11 compute-0 python3.9[107838]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 21:47:11 compute-0 sudo[107836]: pam_unix(sudo:session): session closed for user root
Nov 24 21:47:12 compute-0 sudo[107989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhxghlqrbutuxvajqlwfkivodswsvkxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020831.9945083-64-120148661109264/AnsiballZ_systemd_service.py'
Nov 24 21:47:12 compute-0 sudo[107989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:47:12 compute-0 python3.9[107991]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 21:47:12 compute-0 sudo[107989]: pam_unix(sudo:session): session closed for user root
Nov 24 21:47:13 compute-0 sudo[108142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kijjmuwpxgxrklnebybtpcdppymprsiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020832.9146206-64-20795209575135/AnsiballZ_systemd_service.py'
Nov 24 21:47:13 compute-0 sudo[108142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:47:13 compute-0 python3.9[108144]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 21:47:13 compute-0 sudo[108142]: pam_unix(sudo:session): session closed for user root
Nov 24 21:47:14 compute-0 sudo[108295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eusbomcvtwmldaxupajermauanquwdxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020833.8274794-64-239001608220127/AnsiballZ_systemd_service.py'
Nov 24 21:47:14 compute-0 sudo[108295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:47:14 compute-0 python3.9[108297]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 21:47:14 compute-0 sudo[108295]: pam_unix(sudo:session): session closed for user root
Nov 24 21:47:15 compute-0 sudo[108448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbuxbphclavpvxudalmbepdgbbxmjrfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020834.727509-64-54590417108270/AnsiballZ_systemd_service.py'
Nov 24 21:47:15 compute-0 sudo[108448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:47:15 compute-0 python3.9[108450]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 21:47:16 compute-0 sudo[108448]: pam_unix(sudo:session): session closed for user root
Nov 24 21:47:17 compute-0 sudo[108601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nelgdhadcfwxlpnlpilopqzmfjienglv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020836.6525292-64-1362112956238/AnsiballZ_systemd_service.py'
Nov 24 21:47:17 compute-0 sudo[108601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:47:17 compute-0 python3.9[108603]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 21:47:17 compute-0 sudo[108601]: pam_unix(sudo:session): session closed for user root
Nov 24 21:47:17 compute-0 sudo[108754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siovsmzedlqdiqchgvtenpjtrewnsaug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020837.5117621-64-262980719230802/AnsiballZ_systemd_service.py'
Nov 24 21:47:17 compute-0 sudo[108754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:47:18 compute-0 python3.9[108756]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 21:47:18 compute-0 sudo[108754]: pam_unix(sudo:session): session closed for user root
Nov 24 21:47:19 compute-0 sudo[108907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atqfqykrkywjmonpjhqxarraybdzyepn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020838.6830142-116-214467416993868/AnsiballZ_file.py'
Nov 24 21:47:19 compute-0 sudo[108907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:47:19 compute-0 python3.9[108909]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:47:19 compute-0 sudo[108907]: pam_unix(sudo:session): session closed for user root
Nov 24 21:47:20 compute-0 sudo[109069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrqmaprwrhnajodyjykuglgefytlyncg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020839.5891652-116-97955508838092/AnsiballZ_file.py'
Nov 24 21:47:20 compute-0 sudo[109069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:47:20 compute-0 podman[109033]: 2025-11-24 21:47:20.107920223 +0000 UTC m=+0.132523985 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 24 21:47:20 compute-0 python3.9[109079]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:47:20 compute-0 sudo[109069]: pam_unix(sudo:session): session closed for user root
Nov 24 21:47:20 compute-0 sudo[109239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etkjxurradykbcuvroxdheomyoyxnqqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020840.521237-116-59247819888335/AnsiballZ_file.py'
Nov 24 21:47:20 compute-0 sudo[109239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:47:21 compute-0 python3.9[109241]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:47:21 compute-0 sudo[109239]: pam_unix(sudo:session): session closed for user root
Nov 24 21:47:21 compute-0 sudo[109391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnqminjnridhszhigktwvzofyrrbecww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020841.2840102-116-275095247787278/AnsiballZ_file.py'
Nov 24 21:47:21 compute-0 sudo[109391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:47:21 compute-0 python3.9[109393]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:47:21 compute-0 sudo[109391]: pam_unix(sudo:session): session closed for user root
Nov 24 21:47:22 compute-0 sudo[109543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhgyevwhlqmaiymmwlqcucbgjvdbmztv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020842.098165-116-167680761743682/AnsiballZ_file.py'
Nov 24 21:47:22 compute-0 sudo[109543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:47:22 compute-0 python3.9[109545]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:47:22 compute-0 sudo[109543]: pam_unix(sudo:session): session closed for user root
Nov 24 21:47:23 compute-0 sudo[109705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqdsushrvknncanyoogeijqggsgewtyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020842.7635665-116-97651671519421/AnsiballZ_file.py'
Nov 24 21:47:23 compute-0 sudo[109705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:47:23 compute-0 podman[109669]: 2025-11-24 21:47:23.182375368 +0000 UTC m=+0.100245296 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 24 21:47:23 compute-0 python3.9[109717]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:47:23 compute-0 sudo[109705]: pam_unix(sudo:session): session closed for user root
Nov 24 21:47:24 compute-0 sudo[109867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwibpsvjepiaqjdlkqyflicwmitvxdvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020843.5590978-116-274017752849378/AnsiballZ_file.py'
Nov 24 21:47:24 compute-0 sudo[109867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:47:24 compute-0 python3.9[109869]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:47:24 compute-0 sudo[109867]: pam_unix(sudo:session): session closed for user root
Nov 24 21:47:24 compute-0 sudo[110019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohrdzrfjiytvjuylqqstfdyxhmlcagzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020844.5617254-166-161931374306234/AnsiballZ_file.py'
Nov 24 21:47:24 compute-0 sudo[110019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:47:25 compute-0 python3.9[110021]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:47:25 compute-0 sudo[110019]: pam_unix(sudo:session): session closed for user root
Nov 24 21:47:25 compute-0 sudo[110171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvwthckulvhaxsqvmtwtiwkpepazdmgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020845.3677993-166-121925247194457/AnsiballZ_file.py'
Nov 24 21:47:25 compute-0 sudo[110171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:47:25 compute-0 python3.9[110173]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:47:25 compute-0 sudo[110171]: pam_unix(sudo:session): session closed for user root
Nov 24 21:47:26 compute-0 sudo[110323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hshpuoonpqavguzwerzmzgjgqhkykgbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020846.1317794-166-160029219358565/AnsiballZ_file.py'
Nov 24 21:47:26 compute-0 sudo[110323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:47:26 compute-0 python3.9[110325]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:47:26 compute-0 sudo[110323]: pam_unix(sudo:session): session closed for user root
Nov 24 21:47:27 compute-0 sudo[110475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yisxtljxzkqsctlvciscysqugjigvnyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020846.9199724-166-273245365107254/AnsiballZ_file.py'
Nov 24 21:47:27 compute-0 sudo[110475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:47:27 compute-0 python3.9[110477]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:47:27 compute-0 sudo[110475]: pam_unix(sudo:session): session closed for user root
Nov 24 21:47:28 compute-0 sudo[110627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcignbmmeuqqnfpasdebvwruhemvjvbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020847.5845628-166-208041630978483/AnsiballZ_file.py'
Nov 24 21:47:28 compute-0 sudo[110627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:47:28 compute-0 python3.9[110629]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:47:28 compute-0 sudo[110627]: pam_unix(sudo:session): session closed for user root
Nov 24 21:47:28 compute-0 sudo[110779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euufncypgjjnersevimxpylvvphqslbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020848.5188055-166-109965140600001/AnsiballZ_file.py'
Nov 24 21:47:28 compute-0 sudo[110779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:47:29 compute-0 python3.9[110781]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:47:29 compute-0 sudo[110779]: pam_unix(sudo:session): session closed for user root
Nov 24 21:47:29 compute-0 sudo[110931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eghfjibnuivdhwokzhzmbtkkbxvmyifw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020849.3410227-166-278718806755075/AnsiballZ_file.py'
Nov 24 21:47:29 compute-0 sudo[110931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:47:29 compute-0 python3.9[110933]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:47:29 compute-0 sudo[110931]: pam_unix(sudo:session): session closed for user root
Nov 24 21:47:30 compute-0 sudo[111083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqzuxlrbilftcmilaomgaxntndolcgkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020850.2901504-217-275116331629007/AnsiballZ_command.py'
Nov 24 21:47:30 compute-0 sudo[111083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:47:30 compute-0 python3.9[111085]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:47:30 compute-0 sudo[111083]: pam_unix(sudo:session): session closed for user root
Nov 24 21:47:31 compute-0 python3.9[111237]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 24 21:47:32 compute-0 sudo[111387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cybafblbjabeppscqfksfeyoabbzmrgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020852.1757226-235-114612238499599/AnsiballZ_systemd_service.py'
Nov 24 21:47:32 compute-0 sudo[111387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:47:32 compute-0 python3.9[111389]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 21:47:32 compute-0 systemd[1]: Reloading.
Nov 24 21:47:33 compute-0 systemd-sysv-generator[111424]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:47:33 compute-0 systemd-rc-local-generator[111420]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:47:33 compute-0 sudo[111387]: pam_unix(sudo:session): session closed for user root
Nov 24 21:47:33 compute-0 sudo[111576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgtwyqbdyweojgitpltxacefirldcgic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020853.4276319-243-89532069373320/AnsiballZ_command.py'
Nov 24 21:47:33 compute-0 sudo[111576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:47:34 compute-0 python3.9[111578]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:47:34 compute-0 sudo[111576]: pam_unix(sudo:session): session closed for user root
Nov 24 21:47:34 compute-0 sudo[111729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiovyvptkxontyarolilrvrcpchghurf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020854.2467582-243-249928362952585/AnsiballZ_command.py'
Nov 24 21:47:34 compute-0 sudo[111729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:47:34 compute-0 python3.9[111731]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:47:34 compute-0 sudo[111729]: pam_unix(sudo:session): session closed for user root
Nov 24 21:47:35 compute-0 sudo[111882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xorfhkvivhlbxqurqbdxwuqbsaphlvno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020855.0591068-243-16572074565689/AnsiballZ_command.py'
Nov 24 21:47:35 compute-0 sudo[111882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:47:35 compute-0 python3.9[111884]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:47:35 compute-0 sudo[111882]: pam_unix(sudo:session): session closed for user root
Nov 24 21:47:36 compute-0 sudo[112035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aocjyopddnmnxnymuwvtntecvozhkfpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020855.8900144-243-29446994535645/AnsiballZ_command.py'
Nov 24 21:47:36 compute-0 sudo[112035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:47:36 compute-0 python3.9[112037]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:47:36 compute-0 sudo[112035]: pam_unix(sudo:session): session closed for user root
Nov 24 21:47:36 compute-0 sudo[112188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykfwpeaiavtfbifplismfohdwyfkyqfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020856.6181238-243-258798886135103/AnsiballZ_command.py'
Nov 24 21:47:36 compute-0 sudo[112188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:47:37 compute-0 python3.9[112190]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:47:37 compute-0 sudo[112188]: pam_unix(sudo:session): session closed for user root
Nov 24 21:47:37 compute-0 sudo[112341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqwzplbokyyljukvzivkyvkioyriypuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020857.379767-243-179657405784558/AnsiballZ_command.py'
Nov 24 21:47:37 compute-0 sudo[112341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:47:38 compute-0 python3.9[112343]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:47:38 compute-0 sudo[112341]: pam_unix(sudo:session): session closed for user root
Nov 24 21:47:38 compute-0 sudo[112494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xffmmypfajqcabefogtpoojrddpfirpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020858.3400245-243-137723465403326/AnsiballZ_command.py'
Nov 24 21:47:38 compute-0 sudo[112494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:47:38 compute-0 python3.9[112496]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:47:38 compute-0 sudo[112494]: pam_unix(sudo:session): session closed for user root
Nov 24 21:47:39 compute-0 sudo[112647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjmpsapkygbzbcipqnozjpmerdshphnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020859.3462055-297-265320306965645/AnsiballZ_getent.py'
Nov 24 21:47:39 compute-0 sudo[112647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:47:40 compute-0 python3.9[112649]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Nov 24 21:47:40 compute-0 sudo[112647]: pam_unix(sudo:session): session closed for user root
Nov 24 21:47:40 compute-0 sudo[112800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzbydmvbwcluhetvbjdamzejccssyvuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020860.4142525-305-277615117977411/AnsiballZ_group.py'
Nov 24 21:47:40 compute-0 sudo[112800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:47:41 compute-0 python3.9[112802]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 24 21:47:41 compute-0 groupadd[112803]: group added to /etc/group: name=libvirt, GID=42473
Nov 24 21:47:41 compute-0 groupadd[112803]: group added to /etc/gshadow: name=libvirt
Nov 24 21:47:41 compute-0 groupadd[112803]: new group: name=libvirt, GID=42473
Nov 24 21:47:41 compute-0 sudo[112800]: pam_unix(sudo:session): session closed for user root
Nov 24 21:47:42 compute-0 sudo[112958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brzgtajbnfcgsvhsqvignxzonjdzvdao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020861.5699604-313-190727142538027/AnsiballZ_user.py'
Nov 24 21:47:42 compute-0 sudo[112958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:47:42 compute-0 python3.9[112960]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 24 21:47:42 compute-0 useradd[112962]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Nov 24 21:47:42 compute-0 sudo[112958]: pam_unix(sudo:session): session closed for user root
Nov 24 21:47:43 compute-0 sudo[113118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naeoytehvbhukmadmgjknyxdkynoivjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020862.9465845-324-101336639402009/AnsiballZ_setup.py'
Nov 24 21:47:43 compute-0 sudo[113118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:47:43 compute-0 python3.9[113120]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 21:47:43 compute-0 sudo[113118]: pam_unix(sudo:session): session closed for user root
Nov 24 21:47:44 compute-0 sudo[113202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-caoytidetdipjbhiglitkknmnfxjitfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020862.9465845-324-101336639402009/AnsiballZ_dnf.py'
Nov 24 21:47:44 compute-0 sudo[113202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:47:44 compute-0 python3.9[113204]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 21:47:50 compute-0 podman[113216]: 2025-11-24 21:47:50.579960341 +0000 UTC m=+0.128924629 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true)
Nov 24 21:47:53 compute-0 podman[113287]: 2025-11-24 21:47:53.53532764 +0000 UTC m=+0.081754945 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:47:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:47:54.535 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 21:47:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:47:54.536 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 21:47:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:47:54.536 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 21:48:12 compute-0 kernel: SELinux:  Converting 2758 SID table entries...
Nov 24 21:48:12 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 21:48:12 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 24 21:48:12 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 21:48:12 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 24 21:48:12 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 21:48:12 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 21:48:12 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 21:48:16 compute-0 sshd-session[113449]: Invalid user sdadmin from 45.148.10.240 port 57002
Nov 24 21:48:16 compute-0 sshd-session[113449]: Connection closed by invalid user sdadmin 45.148.10.240 port 57002 [preauth]
Nov 24 21:48:21 compute-0 kernel: SELinux:  Converting 2758 SID table entries...
Nov 24 21:48:21 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 21:48:21 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 24 21:48:21 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 21:48:21 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 24 21:48:21 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 21:48:21 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 21:48:21 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 21:48:21 compute-0 dbus-broker-launch[784]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Nov 24 21:48:21 compute-0 podman[113458]: 2025-11-24 21:48:21.664622157 +0000 UTC m=+0.205966940 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 21:48:24 compute-0 podman[113483]: 2025-11-24 21:48:24.538990127 +0000 UTC m=+0.083286837 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 24 21:48:38 compute-0 sshd-session[115731]: Invalid user pi from 80.94.95.115 port 15822
Nov 24 21:48:38 compute-0 sshd-session[115731]: Connection closed by invalid user pi 80.94.95.115 port 15822 [preauth]
Nov 24 21:48:52 compute-0 podman[122880]: 2025-11-24 21:48:52.559632168 +0000 UTC m=+0.118921269 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 24 21:48:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:48:54.537 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 21:48:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:48:54.538 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 21:48:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:48:54.538 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 21:48:55 compute-0 podman[124295]: 2025-11-24 21:48:55.535677183 +0000 UTC m=+0.087948719 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 24 21:49:18 compute-0 kernel: SELinux:  Converting 2759 SID table entries...
Nov 24 21:49:18 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 21:49:18 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 24 21:49:18 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 21:49:18 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 24 21:49:18 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 21:49:18 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 21:49:18 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 21:49:20 compute-0 groupadd[130359]: group added to /etc/group: name=dnsmasq, GID=992
Nov 24 21:49:20 compute-0 groupadd[130359]: group added to /etc/gshadow: name=dnsmasq
Nov 24 21:49:20 compute-0 groupadd[130359]: new group: name=dnsmasq, GID=992
Nov 24 21:49:20 compute-0 useradd[130366]: new user: name=dnsmasq, UID=992, GID=992, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Nov 24 21:49:20 compute-0 dbus-broker-launch[779]: Noticed file-system modification, trigger reload.
Nov 24 21:49:20 compute-0 dbus-broker-launch[784]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Nov 24 21:49:20 compute-0 dbus-broker-launch[779]: Noticed file-system modification, trigger reload.
Nov 24 21:49:21 compute-0 groupadd[130379]: group added to /etc/group: name=clevis, GID=991
Nov 24 21:49:21 compute-0 groupadd[130379]: group added to /etc/gshadow: name=clevis
Nov 24 21:49:21 compute-0 groupadd[130379]: new group: name=clevis, GID=991
Nov 24 21:49:21 compute-0 useradd[130386]: new user: name=clevis, UID=991, GID=991, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Nov 24 21:49:21 compute-0 usermod[130396]: add 'clevis' to group 'tss'
Nov 24 21:49:21 compute-0 usermod[130396]: add 'clevis' to shadow group 'tss'
Nov 24 21:49:23 compute-0 polkitd[43731]: Reloading rules
Nov 24 21:49:23 compute-0 polkitd[43731]: Collecting garbage unconditionally...
Nov 24 21:49:23 compute-0 polkitd[43731]: Loading rules from directory /etc/polkit-1/rules.d
Nov 24 21:49:23 compute-0 polkitd[43731]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 24 21:49:23 compute-0 polkitd[43731]: Finished loading, compiling and executing 3 rules
Nov 24 21:49:23 compute-0 polkitd[43731]: Reloading rules
Nov 24 21:49:23 compute-0 polkitd[43731]: Collecting garbage unconditionally...
Nov 24 21:49:23 compute-0 polkitd[43731]: Loading rules from directory /etc/polkit-1/rules.d
Nov 24 21:49:23 compute-0 polkitd[43731]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 24 21:49:23 compute-0 polkitd[43731]: Finished loading, compiling and executing 3 rules
Nov 24 21:49:23 compute-0 podman[130417]: 2025-11-24 21:49:23.577691403 +0000 UTC m=+0.132794516 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:49:25 compute-0 groupadd[130610]: group added to /etc/group: name=ceph, GID=167
Nov 24 21:49:25 compute-0 groupadd[130610]: group added to /etc/gshadow: name=ceph
Nov 24 21:49:25 compute-0 groupadd[130610]: new group: name=ceph, GID=167
Nov 24 21:49:25 compute-0 useradd[130616]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Nov 24 21:49:26 compute-0 podman[130623]: 2025-11-24 21:49:26.514737739 +0000 UTC m=+0.067271069 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 24 21:49:28 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Nov 24 21:49:28 compute-0 sshd[1010]: Received signal 15; terminating.
Nov 24 21:49:28 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Nov 24 21:49:28 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Nov 24 21:49:28 compute-0 systemd[1]: sshd.service: Consumed 3.690s CPU time, read 32.0K from disk, written 72.0K to disk.
Nov 24 21:49:28 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Nov 24 21:49:28 compute-0 systemd[1]: Stopping sshd-keygen.target...
Nov 24 21:49:28 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 24 21:49:28 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 24 21:49:28 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 24 21:49:28 compute-0 systemd[1]: Reached target sshd-keygen.target.
Nov 24 21:49:28 compute-0 systemd[1]: Starting OpenSSH server daemon...
Nov 24 21:49:28 compute-0 sshd[131154]: Server listening on 0.0.0.0 port 22.
Nov 24 21:49:28 compute-0 sshd[131154]: Server listening on :: port 22.
Nov 24 21:49:28 compute-0 systemd[1]: Started OpenSSH server daemon.
Nov 24 21:49:30 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 21:49:30 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 24 21:49:30 compute-0 systemd[1]: Reloading.
Nov 24 21:49:31 compute-0 systemd-sysv-generator[131414]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:49:31 compute-0 systemd-rc-local-generator[131411]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:49:31 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 21:49:33 compute-0 sudo[113202]: pam_unix(sudo:session): session closed for user root
Nov 24 21:49:34 compute-0 sudo[134588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkcdhleineynwseoxokgumcwsjguwmsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020973.7436903-336-172661963353387/AnsiballZ_systemd.py'
Nov 24 21:49:34 compute-0 sudo[134588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:49:34 compute-0 python3.9[134610]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 21:49:34 compute-0 systemd[1]: Reloading.
Nov 24 21:49:34 compute-0 systemd-rc-local-generator[135020]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:49:34 compute-0 systemd-sysv-generator[135024]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:49:35 compute-0 sudo[134588]: pam_unix(sudo:session): session closed for user root
Nov 24 21:49:35 compute-0 sudo[135775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxzqqytzxhiisczxngnddbxisjllknxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020975.3428972-336-115422223756372/AnsiballZ_systemd.py'
Nov 24 21:49:35 compute-0 sudo[135775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:49:35 compute-0 python3.9[135800]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 21:49:36 compute-0 systemd[1]: Reloading.
Nov 24 21:49:36 compute-0 systemd-sysv-generator[136169]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:49:36 compute-0 systemd-rc-local-generator[136163]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:49:36 compute-0 sudo[135775]: pam_unix(sudo:session): session closed for user root
Nov 24 21:49:36 compute-0 sudo[136908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddxoydqhpaqleseajfhdpbggnlduwdtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020976.5003958-336-252108789986774/AnsiballZ_systemd.py'
Nov 24 21:49:36 compute-0 sudo[136908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:49:37 compute-0 python3.9[136923]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 21:49:37 compute-0 systemd[1]: Reloading.
Nov 24 21:49:37 compute-0 systemd-rc-local-generator[137293]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:49:37 compute-0 systemd-sysv-generator[137297]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:49:37 compute-0 sudo[136908]: pam_unix(sudo:session): session closed for user root
Nov 24 21:49:38 compute-0 sudo[138027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-doyalsfeiyvstumbttcharzpgzagylyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020977.7650537-336-12649877940987/AnsiballZ_systemd.py'
Nov 24 21:49:38 compute-0 sudo[138027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:49:38 compute-0 python3.9[138053]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 21:49:38 compute-0 systemd[1]: Reloading.
Nov 24 21:49:38 compute-0 systemd-sysv-generator[138412]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:49:38 compute-0 systemd-rc-local-generator[138405]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:49:38 compute-0 sudo[138027]: pam_unix(sudo:session): session closed for user root
Nov 24 21:49:39 compute-0 sudo[139229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfuctultugdnnsgoeubbfgpjnbpgpofa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020979.0758336-365-222280424335666/AnsiballZ_systemd.py'
Nov 24 21:49:39 compute-0 sudo[139229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:49:39 compute-0 python3.9[139249]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 21:49:39 compute-0 systemd[1]: Reloading.
Nov 24 21:49:39 compute-0 systemd-rc-local-generator[139714]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:49:39 compute-0 systemd-sysv-generator[139719]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:49:40 compute-0 sudo[139229]: pam_unix(sudo:session): session closed for user root
Nov 24 21:49:40 compute-0 sudo[140523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iryfcnxmrkskawxkokrmmwnigagqgywv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020980.3349202-365-206741721423001/AnsiballZ_systemd.py'
Nov 24 21:49:40 compute-0 sudo[140523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:49:40 compute-0 python3.9[140547]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 21:49:41 compute-0 systemd[1]: Reloading.
Nov 24 21:49:41 compute-0 systemd-rc-local-generator[140925]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:49:41 compute-0 systemd-sysv-generator[140932]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:49:41 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 21:49:41 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 24 21:49:41 compute-0 systemd[1]: man-db-cache-update.service: Consumed 13.320s CPU time.
Nov 24 21:49:41 compute-0 systemd[1]: run-r8cfa4b9813654f988e6e645cc2d18d6d.service: Deactivated successfully.
Nov 24 21:49:41 compute-0 sudo[140523]: pam_unix(sudo:session): session closed for user root
Nov 24 21:49:41 compute-0 sudo[141086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udscnheuxautnxnpypfpyegqjhabfcjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020981.539637-365-221833672324216/AnsiballZ_systemd.py'
Nov 24 21:49:41 compute-0 sudo[141086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:49:42 compute-0 python3.9[141088]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 21:49:42 compute-0 systemd[1]: Reloading.
Nov 24 21:49:42 compute-0 systemd-rc-local-generator[141118]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:49:42 compute-0 systemd-sysv-generator[141123]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:49:42 compute-0 sudo[141086]: pam_unix(sudo:session): session closed for user root
Nov 24 21:49:43 compute-0 sudo[141276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgqduhsrqvospzccpkusloperlgucdvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020982.8821201-365-165804885806982/AnsiballZ_systemd.py'
Nov 24 21:49:43 compute-0 sudo[141276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:49:43 compute-0 python3.9[141278]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 21:49:43 compute-0 sudo[141276]: pam_unix(sudo:session): session closed for user root
Nov 24 21:49:44 compute-0 sudo[141431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlayycxmcenmzdkogmmbrsytyniweaky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020983.9247189-365-2825343574117/AnsiballZ_systemd.py'
Nov 24 21:49:44 compute-0 sudo[141431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:49:44 compute-0 python3.9[141433]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 21:49:44 compute-0 systemd[1]: Reloading.
Nov 24 21:49:44 compute-0 systemd-rc-local-generator[141462]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:49:44 compute-0 systemd-sysv-generator[141468]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:49:45 compute-0 sudo[141431]: pam_unix(sudo:session): session closed for user root
Nov 24 21:49:45 compute-0 sudo[141621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehcygowzcquuzdqcepvzdqzqvlfuirlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020985.2808266-401-155427409991739/AnsiballZ_systemd.py'
Nov 24 21:49:45 compute-0 sudo[141621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:49:45 compute-0 python3.9[141623]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 21:49:46 compute-0 systemd[1]: Reloading.
Nov 24 21:49:46 compute-0 systemd-rc-local-generator[141651]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:49:46 compute-0 systemd-sysv-generator[141659]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:49:46 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Nov 24 21:49:46 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Nov 24 21:49:46 compute-0 sudo[141621]: pam_unix(sudo:session): session closed for user root
Nov 24 21:49:46 compute-0 sudo[141814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uiacxwqqlwisjhcomnttgfrxccgyjwmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020986.534851-409-230300392396659/AnsiballZ_systemd.py'
Nov 24 21:49:46 compute-0 sudo[141814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:49:47 compute-0 python3.9[141816]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 21:49:48 compute-0 sudo[141814]: pam_unix(sudo:session): session closed for user root
Nov 24 21:49:48 compute-0 sudo[141969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rktaqisyqweijinpujvdamkpmhvusqbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020988.5508513-409-248851480134259/AnsiballZ_systemd.py'
Nov 24 21:49:48 compute-0 sudo[141969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:49:49 compute-0 python3.9[141971]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 21:49:49 compute-0 sudo[141969]: pam_unix(sudo:session): session closed for user root
Nov 24 21:49:49 compute-0 sudo[142124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqvdagtxywpvzkpdevmfqwfoqiysoumc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020989.4885538-409-106445276038920/AnsiballZ_systemd.py'
Nov 24 21:49:49 compute-0 sudo[142124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:49:50 compute-0 python3.9[142126]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 21:49:50 compute-0 sudo[142124]: pam_unix(sudo:session): session closed for user root
Nov 24 21:49:50 compute-0 sudo[142279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybgbsipjprpjmkkhjanfgfpuyhgpaloq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020990.589699-409-66227132118197/AnsiballZ_systemd.py'
Nov 24 21:49:50 compute-0 sudo[142279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:49:51 compute-0 python3.9[142281]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 21:49:51 compute-0 sudo[142279]: pam_unix(sudo:session): session closed for user root
Nov 24 21:49:51 compute-0 sudo[142434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srubkzchonruvftyftnnoieqbzujrplc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020991.5512028-409-67966622578542/AnsiballZ_systemd.py'
Nov 24 21:49:51 compute-0 sudo[142434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:49:52 compute-0 python3.9[142436]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 21:49:52 compute-0 sudo[142434]: pam_unix(sudo:session): session closed for user root
Nov 24 21:49:52 compute-0 sudo[142589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfamtlnaposgccpdlpfovvygyczbwreg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020992.5615404-409-24384927869913/AnsiballZ_systemd.py'
Nov 24 21:49:52 compute-0 sudo[142589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:49:53 compute-0 python3.9[142591]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 21:49:53 compute-0 sudo[142589]: pam_unix(sudo:session): session closed for user root
Nov 24 21:49:54 compute-0 sudo[142754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qydfwebtphvqonfpvspzqomrcmjnsyqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020993.592957-409-216681928027582/AnsiballZ_systemd.py'
Nov 24 21:49:54 compute-0 sudo[142754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:49:54 compute-0 podman[142718]: 2025-11-24 21:49:54.112626363 +0000 UTC m=+0.148478738 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, container_name=ovn_controller, org.label-schema.license=GPLv2)
Nov 24 21:49:54 compute-0 python3.9[142762]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 21:49:54 compute-0 sudo[142754]: pam_unix(sudo:session): session closed for user root
Nov 24 21:49:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:49:54.538 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 21:49:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:49:54.539 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 21:49:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:49:54.539 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 21:49:55 compute-0 sudo[142924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmmfvicsqxqjxjqcqrqpzshfepogfwdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020994.6623406-409-117810342932399/AnsiballZ_systemd.py'
Nov 24 21:49:55 compute-0 sudo[142924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:49:55 compute-0 python3.9[142926]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 21:49:55 compute-0 sudo[142924]: pam_unix(sudo:session): session closed for user root
Nov 24 21:49:56 compute-0 sudo[143079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eriofjgozefadyyawrwcvylfyypntbfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020995.585573-409-170770166874667/AnsiballZ_systemd.py'
Nov 24 21:49:56 compute-0 sudo[143079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:49:56 compute-0 python3.9[143081]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 21:49:56 compute-0 sudo[143079]: pam_unix(sudo:session): session closed for user root
Nov 24 21:49:57 compute-0 sudo[143245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrlfaqxkekdheinqysooouveoiinncyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020996.6757286-409-235675326570999/AnsiballZ_systemd.py'
Nov 24 21:49:57 compute-0 sudo[143245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:49:57 compute-0 podman[143208]: 2025-11-24 21:49:57.12181327 +0000 UTC m=+0.090765549 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 24 21:49:57 compute-0 python3.9[143255]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 21:49:57 compute-0 sudo[143245]: pam_unix(sudo:session): session closed for user root
Nov 24 21:49:58 compute-0 sudo[143408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzyxkoloxerprcpxoqnibvwrvtqacixl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020997.7261543-409-122367712872187/AnsiballZ_systemd.py'
Nov 24 21:49:58 compute-0 sudo[143408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:49:58 compute-0 python3.9[143410]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 21:49:58 compute-0 sudo[143408]: pam_unix(sudo:session): session closed for user root
Nov 24 21:49:59 compute-0 sudo[143563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gekmubwscdmkavcohvxnfacuqqecrgpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764020998.7681363-409-259970960488990/AnsiballZ_systemd.py'
Nov 24 21:49:59 compute-0 sudo[143563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:49:59 compute-0 python3.9[143565]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 21:49:59 compute-0 sudo[143563]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:00 compute-0 sudo[143718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjgwpnkxxntizudfkdjpfrhendtnhqye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021000.1548197-409-31876675872324/AnsiballZ_systemd.py'
Nov 24 21:50:00 compute-0 sudo[143718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:00 compute-0 python3.9[143720]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 21:50:00 compute-0 sudo[143718]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:01 compute-0 sudo[143873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pegsbafjtgwqjzfemntgnpezqnxgibrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021001.1613493-409-88666412725913/AnsiballZ_systemd.py'
Nov 24 21:50:01 compute-0 sudo[143873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:01 compute-0 python3.9[143875]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 21:50:01 compute-0 sudo[143873]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:02 compute-0 sudo[144028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjnatowsxiaennwkstbhjuwcqauwgkst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021002.4041188-511-244812795774578/AnsiballZ_file.py'
Nov 24 21:50:02 compute-0 sudo[144028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:03 compute-0 python3.9[144030]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:50:03 compute-0 sudo[144028]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:03 compute-0 sudo[144180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkybzpyhqbvyojrkbpwqjzvmiqevpxux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021003.2317996-511-134252549163751/AnsiballZ_file.py'
Nov 24 21:50:03 compute-0 sudo[144180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:03 compute-0 python3.9[144182]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:50:03 compute-0 sudo[144180]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:04 compute-0 sudo[144332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kakmjycsxtwfjfrycwtoocgnqfobrgsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021004.067336-511-155443384695569/AnsiballZ_file.py'
Nov 24 21:50:04 compute-0 sudo[144332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:04 compute-0 python3.9[144334]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:50:04 compute-0 sudo[144332]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:05 compute-0 sudo[144484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlokpvmeihympdvnpbupvftgfvidrtzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021004.8815198-511-277261263928670/AnsiballZ_file.py'
Nov 24 21:50:05 compute-0 sudo[144484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:05 compute-0 python3.9[144486]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:50:05 compute-0 sudo[144484]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:06 compute-0 sudo[144636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tehoddweexpkqxrbhfuubfxgtrszixzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021005.7186818-511-121157121368556/AnsiballZ_file.py'
Nov 24 21:50:06 compute-0 sudo[144636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:06 compute-0 python3.9[144638]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:50:06 compute-0 sudo[144636]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:06 compute-0 sudo[144788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjsawhlmzjfxuwxdybnlrzjgzncwkxku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021006.5227585-511-29895050445038/AnsiballZ_file.py'
Nov 24 21:50:06 compute-0 sudo[144788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:07 compute-0 python3.9[144790]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:50:07 compute-0 sudo[144788]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:08 compute-0 sudo[144940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxwwuzvqzuaykdiktfrtetbgksiyzuew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021007.4126623-554-133532061302187/AnsiballZ_stat.py'
Nov 24 21:50:08 compute-0 sudo[144940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:08 compute-0 python3.9[144942]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:50:08 compute-0 sudo[144940]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:08 compute-0 sudo[145065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbbruwhrrjaywylogeatckxqqbymuits ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021007.4126623-554-133532061302187/AnsiballZ_copy.py'
Nov 24 21:50:08 compute-0 sudo[145065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:09 compute-0 python3.9[145067]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764021007.4126623-554-133532061302187/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:50:09 compute-0 sudo[145065]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:09 compute-0 sudo[145217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybxgtkloepojqcrimttatpvtuxrbmzyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021009.340307-554-132314097845372/AnsiballZ_stat.py'
Nov 24 21:50:09 compute-0 sudo[145217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:09 compute-0 python3.9[145219]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:50:10 compute-0 sudo[145217]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:10 compute-0 sudo[145342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umxbqddhxrczhtyhttykfoxhsloyqibl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021009.340307-554-132314097845372/AnsiballZ_copy.py'
Nov 24 21:50:10 compute-0 sudo[145342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:10 compute-0 python3.9[145344]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764021009.340307-554-132314097845372/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:50:10 compute-0 sudo[145342]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:11 compute-0 sudo[145494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftrsqufsjxugxkwpatykgqiixsdkmzzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021010.941701-554-267917955793309/AnsiballZ_stat.py'
Nov 24 21:50:11 compute-0 sudo[145494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:11 compute-0 python3.9[145496]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:50:11 compute-0 sudo[145494]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:11 compute-0 sudo[145619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tksoysspcktwvwjuubsklzywivyxbndw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021010.941701-554-267917955793309/AnsiballZ_copy.py'
Nov 24 21:50:11 compute-0 sudo[145619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:12 compute-0 python3.9[145621]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764021010.941701-554-267917955793309/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:50:12 compute-0 sudo[145619]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:12 compute-0 sudo[145771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-meyjbxxmdyhakeebbkcilkhjuthukueu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021012.4377077-554-6232910133929/AnsiballZ_stat.py'
Nov 24 21:50:12 compute-0 sudo[145771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:13 compute-0 python3.9[145773]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:50:13 compute-0 sudo[145771]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:13 compute-0 sudo[145896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jquvbijiiduecytdxdggoqzpanwsuxai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021012.4377077-554-6232910133929/AnsiballZ_copy.py'
Nov 24 21:50:13 compute-0 sudo[145896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:13 compute-0 python3.9[145898]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764021012.4377077-554-6232910133929/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:50:13 compute-0 sudo[145896]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:14 compute-0 sudo[146048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytnziablkapjnhfhaannhjknmkevucpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021013.9630063-554-53955267892568/AnsiballZ_stat.py'
Nov 24 21:50:14 compute-0 sudo[146048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:14 compute-0 python3.9[146050]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:50:14 compute-0 sudo[146048]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:15 compute-0 sudo[146173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgpxoqvckjuisgjxdgjfkdarmhqpqkrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021013.9630063-554-53955267892568/AnsiballZ_copy.py'
Nov 24 21:50:15 compute-0 sudo[146173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:15 compute-0 python3.9[146175]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764021013.9630063-554-53955267892568/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:50:15 compute-0 sudo[146173]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:15 compute-0 sudo[146325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkrbrogdvwekvobjxoybnndfhjormsne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021015.4780452-554-53562121941694/AnsiballZ_stat.py'
Nov 24 21:50:15 compute-0 sudo[146325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:16 compute-0 python3.9[146327]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:50:16 compute-0 sudo[146325]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:16 compute-0 sudo[146452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adsazdhkaryktczvfsdrhimmbufajxej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021015.4780452-554-53562121941694/AnsiballZ_copy.py'
Nov 24 21:50:16 compute-0 sudo[146452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:16 compute-0 sshd-session[146330]: Invalid user solana from 193.32.162.145 port 46954
Nov 24 21:50:16 compute-0 sshd-session[146330]: Connection closed by invalid user solana 193.32.162.145 port 46954 [preauth]
Nov 24 21:50:16 compute-0 python3.9[146454]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764021015.4780452-554-53562121941694/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:50:16 compute-0 sudo[146452]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:17 compute-0 sudo[146604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikvgwhbqjvyhejqvxxelmpjqncfphypf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021017.0832367-554-108105841922039/AnsiballZ_stat.py'
Nov 24 21:50:17 compute-0 sudo[146604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:17 compute-0 python3.9[146606]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:50:17 compute-0 sudo[146604]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:18 compute-0 sudo[146727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrfxnnnvtjfsdipffasphegsllrnidcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021017.0832367-554-108105841922039/AnsiballZ_copy.py'
Nov 24 21:50:18 compute-0 sudo[146727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:18 compute-0 python3.9[146729]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764021017.0832367-554-108105841922039/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:50:18 compute-0 sudo[146727]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:18 compute-0 sudo[146879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofgrrbrsnrfgrhzetcpaysjfixllieio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021018.5837283-554-263205844792270/AnsiballZ_stat.py'
Nov 24 21:50:18 compute-0 sudo[146879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:19 compute-0 python3.9[146881]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:50:19 compute-0 sudo[146879]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:19 compute-0 sudo[147004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipyffipyhpefrsroxxvazvuhwtjqhogc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021018.5837283-554-263205844792270/AnsiballZ_copy.py'
Nov 24 21:50:19 compute-0 sudo[147004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:19 compute-0 python3.9[147006]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764021018.5837283-554-263205844792270/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:50:19 compute-0 sudo[147004]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:20 compute-0 sudo[147156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hypkiyapmbzmijwzlrbrxufbccyrdaxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021020.1352522-667-148114548248286/AnsiballZ_command.py'
Nov 24 21:50:20 compute-0 sudo[147156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:20 compute-0 python3.9[147158]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Nov 24 21:50:20 compute-0 sudo[147156]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:21 compute-0 sudo[147309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdbssikdsxunjeviqolukytzzujaivpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021021.053606-676-143690334246020/AnsiballZ_file.py'
Nov 24 21:50:21 compute-0 sudo[147309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:21 compute-0 python3.9[147311]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:50:21 compute-0 sudo[147309]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:21 compute-0 sshd-session[147312]: Invalid user admin from 45.148.10.240 port 45316
Nov 24 21:50:21 compute-0 sshd-session[147312]: Connection closed by invalid user admin 45.148.10.240 port 45316 [preauth]
Nov 24 21:50:22 compute-0 sudo[147463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utnowlulljxbbtzvzzskcxrbnefuwcxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021021.8943207-676-128821315687433/AnsiballZ_file.py'
Nov 24 21:50:22 compute-0 sudo[147463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:22 compute-0 python3.9[147465]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:50:22 compute-0 sudo[147463]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:23 compute-0 sudo[147615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aoyixasgonnglnpdelomlclfxsfdxjxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021022.7141612-676-78028070406797/AnsiballZ_file.py'
Nov 24 21:50:23 compute-0 sudo[147615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:23 compute-0 python3.9[147617]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:50:23 compute-0 sudo[147615]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:23 compute-0 sudo[147767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndrkcpcxsglorgqpomnkzsuktumioana ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021023.4992778-676-228546783925237/AnsiballZ_file.py'
Nov 24 21:50:23 compute-0 sudo[147767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:24 compute-0 python3.9[147769]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:50:24 compute-0 sudo[147767]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:24 compute-0 podman[147868]: 2025-11-24 21:50:24.595733408 +0000 UTC m=+0.138602383 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Nov 24 21:50:24 compute-0 sudo[147945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enugresylgfofskgwvbffwomkrtlugob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021024.2570815-676-122289825061139/AnsiballZ_file.py'
Nov 24 21:50:24 compute-0 sudo[147945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:24 compute-0 python3.9[147947]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:50:24 compute-0 sudo[147945]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:25 compute-0 sudo[148097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilyedgsovcaxyvjirpkcysbxsbvlddwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021025.0526173-676-194638354547030/AnsiballZ_file.py'
Nov 24 21:50:25 compute-0 sudo[148097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:25 compute-0 python3.9[148099]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:50:25 compute-0 sudo[148097]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:26 compute-0 sudo[148249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cteplpctfnyeiyjfbzfmuyxmyevbfubt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021025.8861818-676-8406858012275/AnsiballZ_file.py'
Nov 24 21:50:26 compute-0 sudo[148249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:26 compute-0 python3.9[148251]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:50:26 compute-0 sudo[148249]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:27 compute-0 sudo[148401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alkiftwlkgqueigrwtiwbuflvhxztxqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021026.689923-676-89251937211997/AnsiballZ_file.py'
Nov 24 21:50:27 compute-0 sudo[148401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:27 compute-0 python3.9[148403]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:50:27 compute-0 sudo[148401]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:27 compute-0 podman[148429]: 2025-11-24 21:50:27.53797747 +0000 UTC m=+0.082563659 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 21:50:27 compute-0 sudo[148573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwpjuixfbiojmkwleasrydxzkntoqoce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021027.4754722-676-14303806862943/AnsiballZ_file.py'
Nov 24 21:50:27 compute-0 sudo[148573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:28 compute-0 python3.9[148575]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:50:28 compute-0 sudo[148573]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:28 compute-0 sudo[148725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtrhyituszqkvgrmrfzrmuvekfglkktw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021028.2779067-676-266469340684021/AnsiballZ_file.py'
Nov 24 21:50:28 compute-0 sudo[148725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:28 compute-0 python3.9[148727]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:50:28 compute-0 sudo[148725]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:29 compute-0 sudo[148877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vamjnapdpybajevciskuwgreprnsjpty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021029.054761-676-16305073469776/AnsiballZ_file.py'
Nov 24 21:50:29 compute-0 sudo[148877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:29 compute-0 python3.9[148879]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:50:29 compute-0 sudo[148877]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:30 compute-0 sudo[149029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhvjbxykjbluvwipqftgxkngjcmjieiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021029.9040396-676-12137845056985/AnsiballZ_file.py'
Nov 24 21:50:30 compute-0 sudo[149029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:30 compute-0 python3.9[149031]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:50:30 compute-0 sudo[149029]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:31 compute-0 sudo[149181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulklfwbpxagvsccdpralggsoxkmaifhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021030.7181356-676-181012228666922/AnsiballZ_file.py'
Nov 24 21:50:31 compute-0 sudo[149181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:31 compute-0 python3.9[149183]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:50:31 compute-0 sudo[149181]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:31 compute-0 sudo[149333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viorkuqoynccvqieshvkipmjetsdtxnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021031.477009-676-265724162075510/AnsiballZ_file.py'
Nov 24 21:50:31 compute-0 sudo[149333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:32 compute-0 python3.9[149335]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:50:32 compute-0 sudo[149333]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:32 compute-0 sudo[149485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wustyrhffzwzxpbeowfulsgdokrtjueu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021032.344039-775-22228919169048/AnsiballZ_stat.py'
Nov 24 21:50:32 compute-0 sudo[149485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:32 compute-0 python3.9[149487]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:50:32 compute-0 sudo[149485]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:33 compute-0 sudo[149608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xijzscxkegvjjypffbpuieypfwyjlmqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021032.344039-775-22228919169048/AnsiballZ_copy.py'
Nov 24 21:50:33 compute-0 sudo[149608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:33 compute-0 python3.9[149610]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764021032.344039-775-22228919169048/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:50:33 compute-0 sudo[149608]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:34 compute-0 sudo[149760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thfitfsxpylupqsgxcgqsyykclgwgysu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021033.779792-775-94396453657308/AnsiballZ_stat.py'
Nov 24 21:50:34 compute-0 sudo[149760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:34 compute-0 python3.9[149762]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:50:34 compute-0 sudo[149760]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:34 compute-0 sudo[149883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrsuigqvuuexvgjtwdaivdfjiyqrzkpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021033.779792-775-94396453657308/AnsiballZ_copy.py'
Nov 24 21:50:34 compute-0 sudo[149883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:34 compute-0 python3.9[149885]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764021033.779792-775-94396453657308/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:50:35 compute-0 sudo[149883]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:35 compute-0 sudo[150035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxcyanmnhmbgbpdzgkjlxohfsbtodjzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021035.1933198-775-148755554201195/AnsiballZ_stat.py'
Nov 24 21:50:35 compute-0 sudo[150035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:35 compute-0 python3.9[150037]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:50:35 compute-0 sudo[150035]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:36 compute-0 sudo[150158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upxnwtzxyjmffzmyxpmfrixiyetmprom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021035.1933198-775-148755554201195/AnsiballZ_copy.py'
Nov 24 21:50:36 compute-0 sudo[150158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:36 compute-0 python3.9[150160]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764021035.1933198-775-148755554201195/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:50:36 compute-0 sudo[150158]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:37 compute-0 sudo[150310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otoltbjpzmruqjqzqwbygghkyftzvrji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021036.584515-775-245562359934101/AnsiballZ_stat.py'
Nov 24 21:50:37 compute-0 sudo[150310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:37 compute-0 python3.9[150312]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:50:37 compute-0 sudo[150310]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:37 compute-0 sudo[150433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwbvmuaezpuzmiygenkhmnrxijwpvlcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021036.584515-775-245562359934101/AnsiballZ_copy.py'
Nov 24 21:50:37 compute-0 sudo[150433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:37 compute-0 python3.9[150435]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764021036.584515-775-245562359934101/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:50:38 compute-0 sudo[150433]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:38 compute-0 sudo[150585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dydtypvshqsgupuxwpwhpvizqbmcahxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021038.2074747-775-63867293765847/AnsiballZ_stat.py'
Nov 24 21:50:38 compute-0 sudo[150585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:38 compute-0 python3.9[150587]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:50:38 compute-0 sudo[150585]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:39 compute-0 sudo[150708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqsbnrbtcnwnuoeexxmqeevszeuskndv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021038.2074747-775-63867293765847/AnsiballZ_copy.py'
Nov 24 21:50:39 compute-0 sudo[150708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:39 compute-0 python3.9[150710]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764021038.2074747-775-63867293765847/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:50:39 compute-0 sudo[150708]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:40 compute-0 sudo[150860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrvmgamotdvncymhkgfsshhlvgslbndb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021039.7397313-775-141579250679410/AnsiballZ_stat.py'
Nov 24 21:50:40 compute-0 sudo[150860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:40 compute-0 python3.9[150862]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:50:40 compute-0 sudo[150860]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:40 compute-0 sudo[150983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kblqctgszkyhkksnvoxkshgernjowrdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021039.7397313-775-141579250679410/AnsiballZ_copy.py'
Nov 24 21:50:40 compute-0 sudo[150983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:41 compute-0 python3.9[150985]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764021039.7397313-775-141579250679410/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:50:41 compute-0 sudo[150983]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:41 compute-0 sudo[151135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhksappiogarlynulygparycmdlkxsxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021041.2705214-775-290203570372/AnsiballZ_stat.py'
Nov 24 21:50:41 compute-0 sudo[151135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:41 compute-0 python3.9[151137]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:50:41 compute-0 sudo[151135]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:42 compute-0 sudo[151258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvqeuisreyazayacdaaqstuqvlwldxcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021041.2705214-775-290203570372/AnsiballZ_copy.py'
Nov 24 21:50:42 compute-0 sudo[151258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:42 compute-0 python3.9[151260]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764021041.2705214-775-290203570372/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:50:42 compute-0 sudo[151258]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:43 compute-0 sudo[151410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qktqlziqkpvwhsqtrrmvhucbnhbizkhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021042.7399163-775-95882679537350/AnsiballZ_stat.py'
Nov 24 21:50:43 compute-0 sudo[151410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:43 compute-0 python3.9[151412]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:50:43 compute-0 sudo[151410]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:43 compute-0 sudo[151533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djdjkgpxmllmlartmhnpvtbvxzadyyyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021042.7399163-775-95882679537350/AnsiballZ_copy.py'
Nov 24 21:50:43 compute-0 sudo[151533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:44 compute-0 python3.9[151535]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764021042.7399163-775-95882679537350/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:50:44 compute-0 sudo[151533]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:44 compute-0 sudo[151685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhgfhhojfboizdpecmmhpbifiopxpaeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021044.354087-775-165844975836662/AnsiballZ_stat.py'
Nov 24 21:50:44 compute-0 sudo[151685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:44 compute-0 python3.9[151687]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:50:44 compute-0 sudo[151685]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:45 compute-0 sudo[151808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-targzlujkqrkxzeqidibzpyyyxyxzjky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021044.354087-775-165844975836662/AnsiballZ_copy.py'
Nov 24 21:50:45 compute-0 sudo[151808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:45 compute-0 python3.9[151810]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764021044.354087-775-165844975836662/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:50:45 compute-0 sudo[151808]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:46 compute-0 sudo[151960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gftblsqiwbgbrliulipixiawggktbndk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021045.833492-775-266211840523030/AnsiballZ_stat.py'
Nov 24 21:50:46 compute-0 sudo[151960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:46 compute-0 python3.9[151962]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:50:46 compute-0 sudo[151960]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:46 compute-0 sudo[152083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azdgatktwcgxberxirgxaahcotkutulq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021045.833492-775-266211840523030/AnsiballZ_copy.py'
Nov 24 21:50:46 compute-0 sudo[152083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:47 compute-0 python3.9[152085]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764021045.833492-775-266211840523030/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:50:47 compute-0 sudo[152083]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:47 compute-0 sudo[152235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bliaphcjwjekgoqgoupdbuzelcljyejp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021047.3140507-775-118310608754049/AnsiballZ_stat.py'
Nov 24 21:50:47 compute-0 sudo[152235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:47 compute-0 python3.9[152237]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:50:47 compute-0 sudo[152235]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:48 compute-0 sudo[152358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcuqrzuxoaricrvsnnvuwcdtpccxewez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021047.3140507-775-118310608754049/AnsiballZ_copy.py'
Nov 24 21:50:48 compute-0 sudo[152358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:48 compute-0 python3.9[152360]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764021047.3140507-775-118310608754049/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:50:48 compute-0 sudo[152358]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:49 compute-0 sudo[152510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxowjjatrwpuesmjfgwzagdvbhjmoips ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021048.7942863-775-240349688290194/AnsiballZ_stat.py'
Nov 24 21:50:49 compute-0 sudo[152510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:49 compute-0 python3.9[152512]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:50:49 compute-0 sudo[152510]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:49 compute-0 sudo[152633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdboodxebdywhzwwunnnodtqtsmzjlru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021048.7942863-775-240349688290194/AnsiballZ_copy.py'
Nov 24 21:50:49 compute-0 sudo[152633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:50 compute-0 python3.9[152635]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764021048.7942863-775-240349688290194/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:50:50 compute-0 sudo[152633]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:50 compute-0 sudo[152785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekpntgkjokvzbaxwbmrbbclsudmzhjno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021050.3726625-775-263472077324894/AnsiballZ_stat.py'
Nov 24 21:50:50 compute-0 sudo[152785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:50 compute-0 python3.9[152787]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:50:50 compute-0 sudo[152785]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:51 compute-0 sudo[152908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnujkpnxtvyxjmlnwbdikluaqcikzmoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021050.3726625-775-263472077324894/AnsiballZ_copy.py'
Nov 24 21:50:51 compute-0 sudo[152908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:51 compute-0 python3.9[152910]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764021050.3726625-775-263472077324894/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:50:51 compute-0 sudo[152908]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:52 compute-0 sudo[153060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clyepqjqspfvfshgbewctamhmyudmtjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021051.752154-775-246458124455723/AnsiballZ_stat.py'
Nov 24 21:50:52 compute-0 sudo[153060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:52 compute-0 python3.9[153062]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:50:52 compute-0 sudo[153060]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:52 compute-0 sudo[153183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unwawttxicpdzwnzgnckjupxblykdgro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021051.752154-775-246458124455723/AnsiballZ_copy.py'
Nov 24 21:50:52 compute-0 sudo[153183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:52 compute-0 python3.9[153185]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764021051.752154-775-246458124455723/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:50:52 compute-0 sudo[153183]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:53 compute-0 python3.9[153335]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:50:54 compute-0 sudo[153488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojkoxpwbncknybzdfrmhcimfdavvndbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021053.997834-981-99994459742511/AnsiballZ_seboolean.py'
Nov 24 21:50:54 compute-0 sudo[153488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:50:54.540 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 21:50:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:50:54.540 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 21:50:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:50:54.541 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 21:50:54 compute-0 python3.9[153490]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Nov 24 21:50:55 compute-0 dbus-broker-launch[784]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Nov 24 21:50:55 compute-0 podman[153493]: 2025-11-24 21:50:55.589875835 +0000 UTC m=+0.133154327 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 21:50:55 compute-0 sudo[153488]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:56 compute-0 sudo[153670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmmdqtgovsdkjzczlydydbhphzwllbgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021056.0291343-989-170651223831490/AnsiballZ_copy.py'
Nov 24 21:50:56 compute-0 sudo[153670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:56 compute-0 python3.9[153672]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:50:56 compute-0 sudo[153670]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:57 compute-0 sudo[153822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrppmmyvgjcegnpnxjnozlxlryknixjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021056.829614-989-267914738336422/AnsiballZ_copy.py'
Nov 24 21:50:57 compute-0 sudo[153822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:57 compute-0 python3.9[153824]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:50:57 compute-0 sudo[153822]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:58 compute-0 sudo[153988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhsarhubxmxixobarktwejeyizjbigsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021057.5868862-989-58477586557465/AnsiballZ_copy.py'
Nov 24 21:50:58 compute-0 sudo[153988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:58 compute-0 podman[153948]: 2025-11-24 21:50:58.009167101 +0000 UTC m=+0.082230888 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 21:50:58 compute-0 python3.9[153995]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:50:58 compute-0 sudo[153988]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:58 compute-0 sudo[154145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rajgspjatogrbzetidegsjozvabajugs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021058.3981633-989-271813249252255/AnsiballZ_copy.py'
Nov 24 21:50:58 compute-0 sudo[154145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:59 compute-0 python3.9[154147]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:50:59 compute-0 sudo[154145]: pam_unix(sudo:session): session closed for user root
Nov 24 21:50:59 compute-0 sudo[154297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwftfptlpeiehbpivxnhylycdhvdnscq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021059.241106-989-120715968862539/AnsiballZ_copy.py'
Nov 24 21:50:59 compute-0 sudo[154297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:50:59 compute-0 python3.9[154299]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:50:59 compute-0 sudo[154297]: pam_unix(sudo:session): session closed for user root
Nov 24 21:51:00 compute-0 sudo[154449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnnftuphsdbstrkffnvorfwzruwnisvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021060.1028805-1025-23818190688034/AnsiballZ_copy.py'
Nov 24 21:51:00 compute-0 sudo[154449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:00 compute-0 python3.9[154451]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:51:00 compute-0 sudo[154449]: pam_unix(sudo:session): session closed for user root
Nov 24 21:51:01 compute-0 sudo[154601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlhxgfiuvaqaaxeahrpgcqiwnqvkjrgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021060.917098-1025-60506592921196/AnsiballZ_copy.py'
Nov 24 21:51:01 compute-0 sudo[154601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:01 compute-0 python3.9[154603]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:51:01 compute-0 sudo[154601]: pam_unix(sudo:session): session closed for user root
Nov 24 21:51:01 compute-0 sudo[154753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jaflktbpvmyxfktwnpnytmucwgpbbpzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021061.6497767-1025-75562284356637/AnsiballZ_copy.py'
Nov 24 21:51:01 compute-0 sudo[154753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:02 compute-0 python3.9[154755]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:51:02 compute-0 sudo[154753]: pam_unix(sudo:session): session closed for user root
Nov 24 21:51:02 compute-0 sudo[154905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wloonkhzsivzzlzlhycjfpnzhqjhggiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021062.3999395-1025-199776944023987/AnsiballZ_copy.py'
Nov 24 21:51:02 compute-0 sudo[154905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:02 compute-0 python3.9[154907]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:51:02 compute-0 sudo[154905]: pam_unix(sudo:session): session closed for user root
Nov 24 21:51:03 compute-0 sudo[155057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkxsmhnbnckvurniphmpilphfaisvetr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021063.111518-1025-193534689146080/AnsiballZ_copy.py'
Nov 24 21:51:03 compute-0 sudo[155057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:03 compute-0 python3.9[155059]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:51:03 compute-0 sudo[155057]: pam_unix(sudo:session): session closed for user root
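The five copy tasks above stage the libvirt/QEMU TLS material (server, client and CA certificates plus keys) under /etc/pki/qemu as root:qemu with mode 0640. A minimal sanity check of the result could look like the following sketch; it assumes only the paths and ownership shown in those tasks.

    # Verify each certificate against the staged CA.
    openssl verify -CAfile /etc/pki/qemu/ca-cert.pem /etc/pki/qemu/server-cert.pem
    openssl verify -CAfile /etc/pki/qemu/ca-cert.pem /etc/pki/qemu/client-cert.pem
    # Check that key and certificate belong together by comparing public-key digests.
    openssl x509 -noout -pubkey -in /etc/pki/qemu/server-cert.pem | sha256sum
    openssl pkey -noout -pubout -in /etc/pki/qemu/server-key.pem | sha256sum
    # Ownership and mode should match the Ansible parameters (owner=root group=qemu mode=0640).
    stat -c '%U %G %a %n' /etc/pki/qemu/*.pem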
Nov 24 21:51:04 compute-0 sudo[155209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryytyjztrlfidengswfexffwqfaqjbog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021063.8798068-1061-219980390408643/AnsiballZ_systemd.py'
Nov 24 21:51:04 compute-0 sudo[155209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:04 compute-0 python3.9[155211]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 21:51:04 compute-0 systemd[1]: Reloading.
Nov 24 21:51:04 compute-0 systemd-rc-local-generator[155241]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:51:04 compute-0 systemd-sysv-generator[155245]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:51:04 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Nov 24 21:51:04 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Nov 24 21:51:04 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Nov 24 21:51:04 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Nov 24 21:51:04 compute-0 systemd[1]: Starting libvirt logging daemon...
Nov 24 21:51:04 compute-0 systemd[1]: Started libvirt logging daemon.
Nov 24 21:51:04 compute-0 sudo[155209]: pam_unix(sudo:session): session closed for user root
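The ansible.builtin.systemd task above reloads systemd and restarts virtlogd; the journal then shows its main and admin sockets coming up. Done by hand, the equivalent would be roughly:

    systemctl daemon-reload
    systemctl restart virtlogd.service
    systemctl --no-pager status virtlogd.service virtlogd.socket virtlogd-admin.socket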
Nov 24 21:51:05 compute-0 sudo[155404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gldgthizkusctxakfcqegjmwkyxudoaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021065.133318-1061-170953699817080/AnsiballZ_systemd.py'
Nov 24 21:51:05 compute-0 sudo[155404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:05 compute-0 python3.9[155406]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 21:51:05 compute-0 systemd[1]: Reloading.
Nov 24 21:51:05 compute-0 systemd-sysv-generator[155434]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:51:05 compute-0 systemd-rc-local-generator[155429]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:51:06 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Nov 24 21:51:06 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Nov 24 21:51:06 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Nov 24 21:51:06 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Nov 24 21:51:06 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Nov 24 21:51:06 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Nov 24 21:51:06 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Nov 24 21:51:06 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Nov 24 21:51:06 compute-0 systemd[1]: Started libvirt nodedev daemon.
Nov 24 21:51:06 compute-0 sudo[155404]: pam_unix(sudo:session): session closed for user root
Nov 24 21:51:06 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Nov 24 21:51:06 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Nov 24 21:51:06 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Nov 24 21:51:06 compute-0 sudo[155628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wswudgempnvatbjhvhivnveuelprvbqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021066.5204606-1061-272946991321963/AnsiballZ_systemd.py'
Nov 24 21:51:06 compute-0 sudo[155628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:07 compute-0 python3.9[155630]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 21:51:07 compute-0 systemd[1]: Reloading.
Nov 24 21:51:07 compute-0 systemd-rc-local-generator[155661]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:51:07 compute-0 systemd-sysv-generator[155666]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:51:07 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Nov 24 21:51:07 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Nov 24 21:51:07 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Nov 24 21:51:07 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Nov 24 21:51:07 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 24 21:51:07 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 24 21:51:07 compute-0 sudo[155628]: pam_unix(sudo:session): session closed for user root
Nov 24 21:51:07 compute-0 setroubleshoot[155443]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 60c991ee-2cfb-40cb-bddf-f6e34ed0b9bf
Nov 24 21:51:07 compute-0 setroubleshoot[155443]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
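setroubleshoot is quoting the standard audit2allow workflow here. One possible way to act on the advice end to end, assuming auditd is running on this host, is sketched below; the commands are the ones quoted in the message plus a restart of virtlogd to reproduce the denial.

    auditctl -w /etc/shadow -p w              # turn on full auditing, as suggested
    systemctl restart virtlogd.service        # re-trigger the AVC
    ausearch -m avc -ts recent                # look for a PATH record naming the offending file
    # Only if the access is considered legitimate, build and load a local policy module:
    ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
    semodule -X 300 -i my-virtlogd.pp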
                                                  
Nov 24 21:51:08 compute-0 sudo[155842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naxgjzjpfkxckdsofbaczvjpjqncnzib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021067.8892326-1061-272736853478628/AnsiballZ_systemd.py'
Nov 24 21:51:08 compute-0 sudo[155842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:08 compute-0 python3.9[155844]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 21:51:08 compute-0 systemd[1]: Reloading.
Nov 24 21:51:08 compute-0 systemd-rc-local-generator[155866]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:51:08 compute-0 systemd-sysv-generator[155870]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:51:08 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Nov 24 21:51:08 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Nov 24 21:51:08 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 24 21:51:08 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Nov 24 21:51:08 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Nov 24 21:51:08 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Nov 24 21:51:08 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Nov 24 21:51:08 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Nov 24 21:51:08 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Nov 24 21:51:08 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Nov 24 21:51:08 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Nov 24 21:51:09 compute-0 systemd[1]: Started libvirt QEMU daemon.
Nov 24 21:51:09 compute-0 sudo[155842]: pam_unix(sudo:session): session closed for user root
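After the virtqemud restart the journal shows the QEMU daemon plus its main, admin and read-only sockets active. A quick follow-up check (a sketch; it assumes the libvirt client tools are installed on the node) could be:

    systemctl --no-pager list-units --type=socket 'virt*'
    virsh -c qemu:///system version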
Nov 24 21:51:09 compute-0 sudo[156057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iltbkdfeapmjvfkavgeauofhqgluyyjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021069.2000632-1061-182788628730460/AnsiballZ_systemd.py'
Nov 24 21:51:09 compute-0 sudo[156057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:09 compute-0 python3.9[156059]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 21:51:09 compute-0 systemd[1]: Reloading.
Nov 24 21:51:10 compute-0 systemd-rc-local-generator[156082]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:51:10 compute-0 systemd-sysv-generator[156089]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:51:10 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Nov 24 21:51:10 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Nov 24 21:51:10 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Nov 24 21:51:10 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Nov 24 21:51:10 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Nov 24 21:51:10 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Nov 24 21:51:10 compute-0 systemd[1]: Starting libvirt secret daemon...
Nov 24 21:51:10 compute-0 systemd[1]: Started libvirt secret daemon.
Nov 24 21:51:10 compute-0 sudo[156057]: pam_unix(sudo:session): session closed for user root
Nov 24 21:51:11 compute-0 sudo[156268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxrnemeldhdbzgfcdpijadobizxyugyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021070.699964-1098-46817166958186/AnsiballZ_file.py'
Nov 24 21:51:11 compute-0 sudo[156268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:11 compute-0 python3.9[156270]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:51:11 compute-0 sudo[156268]: pam_unix(sudo:session): session closed for user root
Nov 24 21:51:11 compute-0 sudo[156420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vptlenvtjdyexemgtlvzvixqmvlhtswg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021071.513313-1106-94168034245865/AnsiballZ_find.py'
Nov 24 21:51:11 compute-0 sudo[156420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:12 compute-0 python3.9[156422]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 24 21:51:12 compute-0 sudo[156420]: pam_unix(sudo:session): session closed for user root
Nov 24 21:51:12 compute-0 sudo[156572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euejkurpsnsohroyzdvrvetcqrtnkhgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021072.609184-1120-128886565314070/AnsiballZ_stat.py'
Nov 24 21:51:12 compute-0 sudo[156572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:13 compute-0 python3.9[156574]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:51:13 compute-0 sudo[156572]: pam_unix(sudo:session): session closed for user root
Nov 24 21:51:13 compute-0 sudo[156695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qulnaovfdqyaalvexedzobyttzmdzsoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021072.609184-1120-128886565314070/AnsiballZ_copy.py'
Nov 24 21:51:13 compute-0 sudo[156695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:13 compute-0 python3.9[156697]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764021072.609184-1120-128886565314070/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:51:13 compute-0 sudo[156695]: pam_unix(sudo:session): session closed for user root
Nov 24 21:51:14 compute-0 sudo[156847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-feziymvwsztuujopgumcyerxihxljbwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021074.2211282-1136-110164902622443/AnsiballZ_file.py'
Nov 24 21:51:14 compute-0 sudo[156847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:14 compute-0 python3.9[156849]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:51:14 compute-0 sudo[156847]: pam_unix(sudo:session): session closed for user root
Nov 24 21:51:15 compute-0 sudo[156999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnfdcirwadudkpcxgiglsanztkpnygsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021075.0002854-1144-229327659667475/AnsiballZ_stat.py'
Nov 24 21:51:15 compute-0 sudo[156999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:15 compute-0 python3.9[157001]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:51:15 compute-0 sudo[156999]: pam_unix(sudo:session): session closed for user root
Nov 24 21:51:15 compute-0 sudo[157077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjmnqtqkhcuseevutqzhellhgmsiivrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021075.0002854-1144-229327659667475/AnsiballZ_file.py'
Nov 24 21:51:15 compute-0 sudo[157077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:16 compute-0 python3.9[157079]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:51:16 compute-0 sudo[157077]: pam_unix(sudo:session): session closed for user root
Nov 24 21:51:16 compute-0 sudo[157229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqyqmvmaacopuavtdgspmqciigdogwoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021076.302183-1156-174525466296722/AnsiballZ_stat.py'
Nov 24 21:51:16 compute-0 sudo[157229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:16 compute-0 python3.9[157231]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:51:16 compute-0 sudo[157229]: pam_unix(sudo:session): session closed for user root
Nov 24 21:51:17 compute-0 sudo[157307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agkrksoofcjwibgicnxmhwlesduncxou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021076.302183-1156-174525466296722/AnsiballZ_file.py'
Nov 24 21:51:17 compute-0 sudo[157307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:17 compute-0 python3.9[157309]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.3nzovrix recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:51:17 compute-0 sudo[157307]: pam_unix(sudo:session): session closed for user root
Nov 24 21:51:17 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Nov 24 21:51:17 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Nov 24 21:51:18 compute-0 sudo[157460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrrulfmqaqclhceykcplfuzcybgvsuwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021077.7951279-1168-273984264257520/AnsiballZ_stat.py'
Nov 24 21:51:18 compute-0 sudo[157460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:18 compute-0 python3.9[157462]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:51:18 compute-0 sudo[157460]: pam_unix(sudo:session): session closed for user root
Nov 24 21:51:18 compute-0 sudo[157538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovbbzxjhvyrwerrhysoqsskxigjthhif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021077.7951279-1168-273984264257520/AnsiballZ_file.py'
Nov 24 21:51:18 compute-0 sudo[157538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:18 compute-0 python3.9[157540]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:51:18 compute-0 sudo[157538]: pam_unix(sudo:session): session closed for user root
Nov 24 21:51:19 compute-0 sudo[157690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhelfuequupiafjuwsjtxfheditpyhdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021079.2465491-1181-216255559936062/AnsiballZ_command.py'
Nov 24 21:51:19 compute-0 sudo[157690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:19 compute-0 python3.9[157692]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:51:19 compute-0 sudo[157690]: pam_unix(sudo:session): session closed for user root
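The command task above snapshots the live ruleset as JSON. With jq installed (an assumption, not something the play requires), the same output can be summarised by hand, for example listing the tables present before the EDPM rules are loaded:

    nft -j list ruleset | jq '[.nftables[] | select(.table) | .table.name] | unique'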
Nov 24 21:51:20 compute-0 sudo[157843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yeunjxggdvztqwrsgyiynoebgwwrcnmn ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764021080.0923529-1189-56626690582629/AnsiballZ_edpm_nftables_from_files.py'
Nov 24 21:51:20 compute-0 sudo[157843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:20 compute-0 python3[157845]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 24 21:51:20 compute-0 sudo[157843]: pam_unix(sudo:session): session closed for user root
Nov 24 21:51:21 compute-0 sudo[157996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvxulqhvgltdrpfkywybfjwrdrgtzgzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021081.0551784-1197-132844840608016/AnsiballZ_stat.py'
Nov 24 21:51:21 compute-0 sudo[157996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:21 compute-0 python3.9[157998]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:51:21 compute-0 sudo[157996]: pam_unix(sudo:session): session closed for user root
Nov 24 21:51:21 compute-0 sudo[158074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuzsuwedynjsfsgzsfkgvbmblyxkalmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021081.0551784-1197-132844840608016/AnsiballZ_file.py'
Nov 24 21:51:21 compute-0 sudo[158074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:22 compute-0 python3.9[158076]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:51:22 compute-0 sudo[158074]: pam_unix(sudo:session): session closed for user root
Nov 24 21:51:22 compute-0 sudo[158226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjiruqrqzlootuokdmvupczfidojewkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021082.3639977-1209-240675823389562/AnsiballZ_stat.py'
Nov 24 21:51:22 compute-0 sudo[158226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:23 compute-0 python3.9[158228]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:51:23 compute-0 sudo[158226]: pam_unix(sudo:session): session closed for user root
Nov 24 21:51:23 compute-0 sudo[158304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyoaenskxhuiwghhynlkxbfzolyptsxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021082.3639977-1209-240675823389562/AnsiballZ_file.py'
Nov 24 21:51:23 compute-0 sudo[158304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:23 compute-0 python3.9[158306]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:51:23 compute-0 sudo[158304]: pam_unix(sudo:session): session closed for user root
Nov 24 21:51:24 compute-0 sudo[158456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnyqekllgtvmpcxzqbclwjzjrshdjuly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021083.8506427-1221-93643533680580/AnsiballZ_stat.py'
Nov 24 21:51:24 compute-0 sudo[158456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:24 compute-0 python3.9[158458]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:51:24 compute-0 sudo[158456]: pam_unix(sudo:session): session closed for user root
Nov 24 21:51:24 compute-0 sudo[158534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iatrlujalnbnoqszypjmbtcxqtduyuui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021083.8506427-1221-93643533680580/AnsiballZ_file.py'
Nov 24 21:51:24 compute-0 sudo[158534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:25 compute-0 python3.9[158536]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:51:25 compute-0 sudo[158534]: pam_unix(sudo:session): session closed for user root
Nov 24 21:51:25 compute-0 sudo[158698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utqezdbnqexwvyouxzutgbfauftyqdbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021085.2879674-1233-243784553390754/AnsiballZ_stat.py'
Nov 24 21:51:25 compute-0 sudo[158698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:25 compute-0 podman[158660]: 2025-11-24 21:51:25.838794964 +0000 UTC m=+0.157199872 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller)
Nov 24 21:51:25 compute-0 python3.9[158704]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:51:26 compute-0 sudo[158698]: pam_unix(sudo:session): session closed for user root
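The podman line above is a periodic healthcheck event for the ovn_controller container (health_status=healthy). The same check can be run on demand; a sketch using only standard podman subcommands:

    podman healthcheck run ovn_controller && echo "ovn_controller is healthy"
    podman ps --filter name=ovn_controller --format '{{.Names}} {{.Status}}'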
Nov 24 21:51:26 compute-0 sudo[158790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idnqhszzucmxloaxqvogaikdgpfsafwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021085.2879674-1233-243784553390754/AnsiballZ_file.py'
Nov 24 21:51:26 compute-0 sudo[158790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:26 compute-0 python3.9[158792]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:51:26 compute-0 sudo[158790]: pam_unix(sudo:session): session closed for user root
Nov 24 21:51:27 compute-0 sudo[158942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkvzuksqicovaddpvilpslzohcirqfav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021086.847431-1245-4143756022857/AnsiballZ_stat.py'
Nov 24 21:51:27 compute-0 sudo[158942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:27 compute-0 python3.9[158944]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:51:27 compute-0 sudo[158942]: pam_unix(sudo:session): session closed for user root
Nov 24 21:51:28 compute-0 sudo[159067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmxfptjfgmnvmhukhxyezflxxtdpdhel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021086.847431-1245-4143756022857/AnsiballZ_copy.py'
Nov 24 21:51:28 compute-0 sudo[159067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:28 compute-0 podman[159069]: 2025-11-24 21:51:28.200447305 +0000 UTC m=+0.084372163 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 24 21:51:28 compute-0 python3.9[159070]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764021086.847431-1245-4143756022857/.source.nft follow=False _original_basename=ruleset.j2 checksum=8a12d4eb5149b6e500230381c1359a710881e9b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:51:28 compute-0 sudo[159067]: pam_unix(sudo:session): session closed for user root
Nov 24 21:51:28 compute-0 sudo[159238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sexitfllbwsgzrjkvnzzblhmitibrdha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021088.5802476-1260-159190078667453/AnsiballZ_file.py'
Nov 24 21:51:28 compute-0 sudo[159238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:29 compute-0 python3.9[159240]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:51:29 compute-0 sudo[159238]: pam_unix(sudo:session): session closed for user root
Nov 24 21:51:29 compute-0 sudo[159390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzowanckfvghtjkurmuhlpnsljfkdqsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021089.4314132-1268-11593146954931/AnsiballZ_command.py'
Nov 24 21:51:29 compute-0 sudo[159390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:30 compute-0 python3.9[159392]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:51:30 compute-0 sudo[159390]: pam_unix(sudo:session): session closed for user root
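The pipeline above concatenates the generated chain, flush, rule and jump files and feeds them to nft -c -f -, which only parses and validates; nothing is committed to the kernel at this point. Reproduced by hand it would be:

    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -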
Nov 24 21:51:30 compute-0 sudo[159545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naydpqldffpilugrmcyeliwkeadxsdus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021090.2955742-1276-1399241260464/AnsiballZ_blockinfile.py'
Nov 24 21:51:30 compute-0 sudo[159545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:31 compute-0 python3.9[159547]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:51:31 compute-0 sudo[159545]: pam_unix(sudo:session): session closed for user root
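Given the blockinfile parameters above (marker "# {mark} ANSIBLE MANAGED BLOCK", marker_begin=BEGIN, marker_end=END, validated with nft -c -f %s), the managed section written to /etc/sysconfig/nftables.conf should look roughly like this:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK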
Nov 24 21:51:31 compute-0 sudo[159697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxztobdaklesjwrrgwqbkphbkdbefckx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021091.4035914-1285-232562428682755/AnsiballZ_command.py'
Nov 24 21:51:31 compute-0 sudo[159697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:32 compute-0 python3.9[159699]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:51:32 compute-0 sudo[159697]: pam_unix(sudo:session): session closed for user root
Nov 24 21:51:32 compute-0 sudo[159850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzhxrxctysacctxkmfpyjkvalavnsjcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021092.3428998-1293-158849962805577/AnsiballZ_stat.py'
Nov 24 21:51:32 compute-0 sudo[159850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:32 compute-0 python3.9[159852]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:51:32 compute-0 sudo[159850]: pam_unix(sudo:session): session closed for user root
Nov 24 21:51:33 compute-0 sudo[160004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyshohnrariudxvryxstqtzvjozrcodn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021093.259925-1301-58886805608454/AnsiballZ_command.py'
Nov 24 21:51:33 compute-0 sudo[160004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:33 compute-0 python3.9[160006]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:51:33 compute-0 sudo[160004]: pam_unix(sudo:session): session closed for user root
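At this point the chains file has been loaded (nft -f /etc/nftables/edpm-chains.nft) and the flush, rules and update-jump files applied, so the live ruleset should now contain the EDPM content. A quick look, without assuming any particular table or chain names:

    nft list chains
    nft list ruleset | head -n 60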
Nov 24 21:51:34 compute-0 sudo[160159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkcviccwhhuaanfswdeqpbdaaksqipaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021094.1387868-1309-240413692410267/AnsiballZ_file.py'
Nov 24 21:51:34 compute-0 sudo[160159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:34 compute-0 python3.9[160161]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:51:34 compute-0 sudo[160159]: pam_unix(sudo:session): session closed for user root
Nov 24 21:51:35 compute-0 sudo[160311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqstlazgcmkswqyqbwdtwqmeklvyytbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021094.977343-1317-63093500477578/AnsiballZ_stat.py'
Nov 24 21:51:35 compute-0 sudo[160311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:35 compute-0 python3.9[160313]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:51:35 compute-0 sudo[160311]: pam_unix(sudo:session): session closed for user root
Nov 24 21:51:36 compute-0 sudo[160434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-endeuduavkxtpjllxwmujoldweylwptt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021094.977343-1317-63093500477578/AnsiballZ_copy.py'
Nov 24 21:51:36 compute-0 sudo[160434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:36 compute-0 python3.9[160436]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764021094.977343-1317-63093500477578/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:51:36 compute-0 sudo[160434]: pam_unix(sudo:session): session closed for user root
Nov 24 21:51:36 compute-0 sudo[160588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wismurxbbvalwpihoiusextykdzenrhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021096.567131-1332-5781871556558/AnsiballZ_stat.py'
Nov 24 21:51:36 compute-0 sudo[160588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:37 compute-0 sshd-session[160561]: error: kex_exchange_identification: read: Connection reset by peer
Nov 24 21:51:37 compute-0 sshd-session[160561]: Connection reset by 45.140.17.97 port 42478
Nov 24 21:51:37 compute-0 python3.9[160590]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:51:37 compute-0 sudo[160588]: pam_unix(sudo:session): session closed for user root
Nov 24 21:51:37 compute-0 sudo[160711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhfrdpyagljzscgvvokgzwhzzdrsjovo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021096.567131-1332-5781871556558/AnsiballZ_copy.py'
Nov 24 21:51:37 compute-0 sudo[160711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:37 compute-0 python3.9[160713]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764021096.567131-1332-5781871556558/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:51:37 compute-0 sudo[160711]: pam_unix(sudo:session): session closed for user root
Nov 24 21:51:38 compute-0 sudo[160863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpzvfsjxdkrupehezuajmuxwetrpvyht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021098.1817863-1347-116717105090973/AnsiballZ_stat.py'
Nov 24 21:51:38 compute-0 sudo[160863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:38 compute-0 python3.9[160865]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:51:38 compute-0 sudo[160863]: pam_unix(sudo:session): session closed for user root
Nov 24 21:51:39 compute-0 sudo[160986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dinuegfunmuddxujabgfpcbsewcacjhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021098.1817863-1347-116717105090973/AnsiballZ_copy.py'
Nov 24 21:51:39 compute-0 sudo[160986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:39 compute-0 python3.9[160988]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764021098.1817863-1347-116717105090973/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:51:39 compute-0 sudo[160986]: pam_unix(sudo:session): session closed for user root
Nov 24 21:51:40 compute-0 sudo[161138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjrimonhnguelhgguoadljlvdinxvfjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021099.7151172-1362-131205982397834/AnsiballZ_systemd.py'
Nov 24 21:51:40 compute-0 sudo[161138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:40 compute-0 python3.9[161140]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 21:51:40 compute-0 systemd[1]: Reloading.
Nov 24 21:51:40 compute-0 systemd-sysv-generator[161166]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:51:40 compute-0 systemd-rc-local-generator[161160]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:51:40 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Nov 24 21:51:40 compute-0 sudo[161138]: pam_unix(sudo:session): session closed for user root
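edpm_libvirt.target is now installed, enabled and reached. What the target actually pulls in can be inspected directly (sketch):

    systemctl is-enabled edpm_libvirt.target
    systemctl list-dependencies edpm_libvirt.target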
Nov 24 21:51:41 compute-0 sudo[161330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hguxvvibdpvceoxxixdbsvysyeztncpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021100.957629-1370-36175597051049/AnsiballZ_systemd.py'
Nov 24 21:51:41 compute-0 sudo[161330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:41 compute-0 python3.9[161332]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 24 21:51:41 compute-0 systemd[1]: Reloading.
Nov 24 21:51:41 compute-0 systemd-rc-local-generator[161361]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:51:41 compute-0 systemd-sysv-generator[161365]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:51:42 compute-0 systemd[1]: Reloading.
Nov 24 21:51:42 compute-0 systemd-sysv-generator[161403]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:51:42 compute-0 systemd-rc-local-generator[161398]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:51:42 compute-0 sudo[161330]: pam_unix(sudo:session): session closed for user root
Nov 24 21:51:42 compute-0 sshd-session[106900]: Connection closed by 192.168.122.30 port 48198
Nov 24 21:51:42 compute-0 sshd-session[106897]: pam_unix(sshd:session): session closed for user zuul
Nov 24 21:51:42 compute-0 systemd[1]: session-23.scope: Deactivated successfully.
Nov 24 21:51:42 compute-0 systemd[1]: session-23.scope: Consumed 3min 53.334s CPU time.
Nov 24 21:51:42 compute-0 systemd-logind[806]: Session 23 logged out. Waiting for processes to exit.
Nov 24 21:51:42 compute-0 systemd-logind[806]: Removed session 23.
Nov 24 21:51:48 compute-0 sshd-session[161429]: Accepted publickey for zuul from 192.168.122.30 port 39358 ssh2: ECDSA SHA256:EZFVJPqmz26FBCQurIACD0v24tZqtt5C19oQoS60ZHY
Nov 24 21:51:48 compute-0 systemd-logind[806]: New session 24 of user zuul.
Nov 24 21:51:48 compute-0 systemd[1]: Started Session 24 of User zuul.
Nov 24 21:51:48 compute-0 sshd-session[161429]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 21:51:49 compute-0 python3.9[161582]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 21:51:50 compute-0 python3.9[161736]: ansible-ansible.builtin.service_facts Invoked
Nov 24 21:51:51 compute-0 network[161753]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 21:51:51 compute-0 network[161754]: 'network-scripts' will be removed from distribution in near future.
Nov 24 21:51:51 compute-0 network[161755]: It is advised to switch to 'NetworkManager' instead for network management.
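The three warnings above come from the legacy network-scripts 'network' service. If the node is ever moved to NetworkManager as the message advises, the current state of both services can be compared with something like the following (assumes NetworkManager is installed; nmcli will simply fail otherwise):

    systemctl is-enabled network NetworkManager
    nmcli -t device status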
Nov 24 21:51:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:51:54.541 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 21:51:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:51:54.542 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 21:51:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:51:54.542 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 21:51:55 compute-0 sudo[162024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwamykrdysnmsltmxsdzenwkqlkvaqjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021115.4080722-47-38227199070076/AnsiballZ_setup.py'
Nov 24 21:51:55 compute-0 sudo[162024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:56 compute-0 python3.9[162026]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 21:51:56 compute-0 sudo[162024]: pam_unix(sudo:session): session closed for user root
Nov 24 21:51:56 compute-0 podman[162032]: 2025-11-24 21:51:56.528112452 +0000 UTC m=+0.089419873 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 21:51:56 compute-0 sudo[162134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvhdespzssuavvquwfhulogqwhnhwkbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021115.4080722-47-38227199070076/AnsiballZ_dnf.py'
Nov 24 21:51:56 compute-0 sudo[162134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:51:57 compute-0 python3.9[162136]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 21:51:58 compute-0 podman[162138]: 2025-11-24 21:51:58.508506099 +0000 UTC m=+0.075208336 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 21:52:01 compute-0 sudo[162134]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:02 compute-0 sudo[162307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edywejzzhyskbugnifspcbjlxzyfcucl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021122.1633859-59-71957731835749/AnsiballZ_stat.py'
Nov 24 21:52:02 compute-0 sudo[162307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:02 compute-0 python3.9[162309]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:52:02 compute-0 sudo[162307]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:03 compute-0 sudo[162459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iefirdmhwtodrrrztvsrqecmqzjumsvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021123.1949716-69-150296288927880/AnsiballZ_command.py'
Nov 24 21:52:03 compute-0 sudo[162459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:03 compute-0 python3.9[162461]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:52:03 compute-0 sudo[162459]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:04 compute-0 sudo[162612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-minnnlnvddjvfqllmwipknyncnoqzjfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021124.240873-79-263936226770544/AnsiballZ_stat.py'
Nov 24 21:52:04 compute-0 sudo[162612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:04 compute-0 python3.9[162614]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:52:04 compute-0 sudo[162612]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:05 compute-0 sudo[162764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-innxdmrgojuneyupnucpwyoxwfwpnpjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021125.117357-87-4762632199330/AnsiballZ_command.py'
Nov 24 21:52:05 compute-0 sudo[162764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:05 compute-0 python3.9[162766]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:52:05 compute-0 sudo[162764]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:06 compute-0 sudo[162917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgqehnutkjmqqztflylxttmyidpomtnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021125.8895845-95-152083567956846/AnsiballZ_stat.py'
Nov 24 21:52:06 compute-0 sudo[162917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:06 compute-0 python3.9[162919]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:52:06 compute-0 sudo[162917]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:07 compute-0 sudo[163040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vztpjcdncefafbvjaxglwiioefiqmney ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021125.8895845-95-152083567956846/AnsiballZ_copy.py'
Nov 24 21:52:07 compute-0 sudo[163040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:07 compute-0 python3.9[163042]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764021125.8895845-95-152083567956846/.source.iscsi _original_basename=.7oq53vxd follow=False checksum=a9e8e58bca6ddfd21811439c5e1843717f4988b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:52:07 compute-0 sudo[163040]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:08 compute-0 sudo[163192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzquiisliyffohmcwhnauvsewdfllozf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021127.601836-110-106598347487715/AnsiballZ_file.py'
Nov 24 21:52:08 compute-0 sudo[163192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:08 compute-0 python3.9[163194]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:52:08 compute-0 sudo[163192]: pam_unix(sudo:session): session closed for user root
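The four tasks above reset the iSCSI initiator identity: /usr/sbin/iscsi-iname generates a fresh IQN, the copy task installs it as /etc/iscsi/initiatorname.iscsi (mode 0644), and the touch of /etc/iscsi/.initiator_reset records that the reset happened. A minimal Python sketch of the same sequence follows; it is not the edpm_ansible implementation, and the single InitiatorName= line is the standard open-iscsi format rather than anything visible in the (unlogged) file content.

import os
import subprocess

def reset_initiator_name(path="/etc/iscsi/initiatorname.iscsi",
                         marker="/etc/iscsi/.initiator_reset"):
    # Generate a new IQN the same way the logged command task does.
    iqn = subprocess.run(["/usr/sbin/iscsi-iname"],
                         capture_output=True, text=True, check=True).stdout.strip()
    # Standard open-iscsi format: one "InitiatorName=<iqn>" line (assumed, not logged).
    with open(path, "w") as f:
        f.write(f"InitiatorName={iqn}\n")
    os.chmod(path, 0o644)
    # Leave the reset marker that the playbook touches with mode 0600.
    open(marker, "a").close()
    os.chmod(marker, 0o600)
    return iqn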
Nov 24 21:52:09 compute-0 sudo[163344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsfpzttrczwdivxgsnkjxwsnryrcssov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021128.5810056-118-211931582568286/AnsiballZ_lineinfile.py'
Nov 24 21:52:09 compute-0 sudo[163344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:09 compute-0 python3.9[163346]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:52:09 compute-0 sudo[163344]: pam_unix(sudo:session): session closed for user root
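The lineinfile task above pins node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 in /etc/iscsi/iscsid.conf, either replacing an existing chap_algs line or inserting one after the commented default. A rough Python equivalent of that idempotent edit, using the same regexp/insertafter parameters (a sketch, not the module's code):

import re

def set_chap_algs(path="/etc/iscsi/iscsid.conf"):
    line = "node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5"
    with open(path) as f:
        lines = f.read().splitlines()
    # regexp=^node.session.auth.chap_algs : replace an existing setting in place.
    for i, text in enumerate(lines):
        if re.match(r"^node\.session\.auth\.chap_algs", text):
            lines[i] = line
            break
    else:
        # insertafter=^#node.session.auth.chap.algs : put the new line after the
        # commented default, or append it if the comment is missing.
        anchor = next((i for i, text in enumerate(lines)
                       if re.match(r"^#node\.session\.auth\.chap\.algs", text)), None)
        if anchor is None:
            lines.append(line)
        else:
            lines.insert(anchor + 1, line)
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")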
Nov 24 21:52:09 compute-0 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 21:52:09 compute-0 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 21:52:10 compute-0 sudo[163497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujegvxljenvyhlbgdyevfarritdiixpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021129.5767539-127-29814780135310/AnsiballZ_systemd_service.py'
Nov 24 21:52:10 compute-0 sudo[163497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:10 compute-0 python3.9[163499]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 21:52:10 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 24 21:52:10 compute-0 sudo[163497]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:11 compute-0 sudo[163653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bghmdbihogplgziucxrhsdjvbzcjhstt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021131.0279372-135-225749529616686/AnsiballZ_systemd_service.py'
Nov 24 21:52:11 compute-0 sudo[163653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:11 compute-0 python3.9[163655]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 21:52:11 compute-0 systemd[1]: Reloading.
Nov 24 21:52:11 compute-0 systemd-sysv-generator[163689]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:52:11 compute-0 systemd-rc-local-generator[163686]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:52:12 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 24 21:52:12 compute-0 systemd[1]: Starting Open-iSCSI...
Nov 24 21:52:12 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Nov 24 21:52:12 compute-0 systemd[1]: Started Open-iSCSI.
Nov 24 21:52:12 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Nov 24 21:52:12 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Nov 24 21:52:12 compute-0 sudo[163653]: pam_unix(sudo:session): session closed for user root
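The two systemd_service tasks above enable and start iscsid.socket and then iscsid itself; the journal confirms the socket is listening, the one-shot iscsi.service configuration is skipped because /etc/iscsi/initiatorname.iscsi already exists, and Open-iSCSI comes up. Outside Ansible this reduces to two systemctl calls, sketched here:

import subprocess

# Equivalent of the logged ansible.builtin.systemd_service tasks
# (enabled=True, state=started) for the socket and the service.
for unit in ("iscsid.socket", "iscsid.service"):
    subprocess.run(["systemctl", "enable", "--now", unit], check=True)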
Nov 24 21:52:12 compute-0 sudo[163853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvgqylhzmjmnhqrzavbvervpssuodiqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021132.605457-146-14479809204552/AnsiballZ_service_facts.py'
Nov 24 21:52:12 compute-0 sudo[163853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:13 compute-0 python3.9[163855]: ansible-ansible.builtin.service_facts Invoked
Nov 24 21:52:13 compute-0 network[163872]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 21:52:13 compute-0 network[163873]: 'network-scripts' will be removed from distribution in near future.
Nov 24 21:52:13 compute-0 network[163874]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 21:52:18 compute-0 sudo[163853]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:18 compute-0 sudo[164143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtifmeewnzzuvnlbjqgcizilxkfeshlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021138.5399077-156-166159362543740/AnsiballZ_file.py'
Nov 24 21:52:18 compute-0 sudo[164143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:19 compute-0 python3.9[164145]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 24 21:52:19 compute-0 sudo[164143]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:20 compute-0 sudo[164295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbqzxvqrzjzlxbzewnifynwwwmitscpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021139.5768273-164-123884972458228/AnsiballZ_modprobe.py'
Nov 24 21:52:20 compute-0 sudo[164295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:20 compute-0 python3.9[164297]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Nov 24 21:52:20 compute-0 sudo[164295]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:20 compute-0 sudo[164451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjcypzkdxjtamhfnoashyuekcfaxtley ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021140.5740862-172-118187045332782/AnsiballZ_stat.py'
Nov 24 21:52:20 compute-0 sudo[164451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:21 compute-0 python3.9[164453]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:52:21 compute-0 sudo[164451]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:21 compute-0 sudo[164574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyvbcrhkgjovstapquugimirtlvgchna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021140.5740862-172-118187045332782/AnsiballZ_copy.py'
Nov 24 21:52:21 compute-0 sudo[164574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:21 compute-0 python3.9[164576]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764021140.5740862-172-118187045332782/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:52:21 compute-0 sudo[164574]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:22 compute-0 sudo[164726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouefuvykrmmtrgpquscnqwdmvqlbvmxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021142.1071184-188-142043325684707/AnsiballZ_lineinfile.py'
Nov 24 21:52:22 compute-0 sudo[164726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:22 compute-0 python3.9[164728]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:52:22 compute-0 sudo[164726]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:23 compute-0 sudo[164880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hirvtdlrvvetagyvtemvwpfdfsshagjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021142.8759887-196-61264141553643/AnsiballZ_systemd.py'
Nov 24 21:52:23 compute-0 sudo[164880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:23 compute-0 python3.9[164882]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 21:52:23 compute-0 sshd-session[164862]: Invalid user admin from 45.148.10.240 port 50692
Nov 24 21:52:23 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 24 21:52:23 compute-0 systemd[1]: Stopped Load Kernel Modules.
Nov 24 21:52:23 compute-0 systemd[1]: Stopping Load Kernel Modules...
Nov 24 21:52:23 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 24 21:52:23 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 24 21:52:24 compute-0 sudo[164880]: pam_unix(sudo:session): session closed for user root
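Taken together, the tasks above make the dm-multipath module both active and persistent: modprobe loads it now, /etc/modules-load.d/dm-multipath.conf and an entry in /etc/modules cover future boots, and systemd-modules-load.service is restarted to pick up the drop-in. The drop-in's content is not logged (only its checksum), so the sketch below assumes the conventional modules-load.d format of one module name per line.

import os
import subprocess

# Load the module immediately (the logged modprobe task, persistent=disabled).
subprocess.run(["modprobe", "dm-multipath"], check=True)

# Persist it for the next boot; the file content is assumed, not taken from the log.
os.makedirs("/etc/modules-load.d", mode=0o755, exist_ok=True)
with open("/etc/modules-load.d/dm-multipath.conf", "w") as f:
    f.write("dm-multipath\n")
os.chmod("/etc/modules-load.d/dm-multipath.conf", 0o644)

# Re-read modules-load.d, as the playbook does by restarting the unit.
subprocess.run(["systemctl", "restart", "systemd-modules-load.service"], check=True)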
Nov 24 21:52:24 compute-0 sshd-session[164862]: Connection closed by invalid user admin 45.148.10.240 port 50692 [preauth]
Nov 24 21:52:24 compute-0 sudo[165036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjyattpxsndmnoitwomfnidzbypmgqlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021144.2421114-204-49473151269636/AnsiballZ_file.py'
Nov 24 21:52:24 compute-0 sudo[165036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:24 compute-0 python3.9[165038]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:52:24 compute-0 sudo[165036]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:25 compute-0 sudo[165188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tllydbcuqmtkacrftyuebeeceyjqzrpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021145.2093606-213-218753186034686/AnsiballZ_stat.py'
Nov 24 21:52:25 compute-0 sudo[165188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:25 compute-0 python3.9[165190]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:52:25 compute-0 sudo[165188]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:26 compute-0 sudo[165340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnrytwmpaywzzdqtcoonxmmhuoloqwta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021146.1192358-222-227169166455946/AnsiballZ_stat.py'
Nov 24 21:52:26 compute-0 sudo[165340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:26 compute-0 python3.9[165342]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:52:26 compute-0 sudo[165340]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:27 compute-0 sudo[165503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yztomdmexvlbfdoiqgnwmfpubbzonuxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021146.9580588-230-158139914912518/AnsiballZ_stat.py'
Nov 24 21:52:27 compute-0 sudo[165503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:27 compute-0 podman[165466]: 2025-11-24 21:52:27.423403815 +0000 UTC m=+0.179371010 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2)
Nov 24 21:52:27 compute-0 python3.9[165511]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:52:27 compute-0 sudo[165503]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:28 compute-0 sudo[165639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifxpghauucafmcibyjqwlbduzsfncumu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021146.9580588-230-158139914912518/AnsiballZ_copy.py'
Nov 24 21:52:28 compute-0 sudo[165639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:28 compute-0 python3.9[165641]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764021146.9580588-230-158139914912518/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:52:28 compute-0 sudo[165639]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:28 compute-0 podman[165765]: 2025-11-24 21:52:28.935144776 +0000 UTC m=+0.072827798 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 24 21:52:28 compute-0 sudo[165806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhhfpbaawygdwkmhjhoabollhaoaywol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021148.508495-245-210272521236383/AnsiballZ_command.py'
Nov 24 21:52:28 compute-0 sudo[165806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:29 compute-0 python3.9[165812]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:52:29 compute-0 sudo[165806]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:29 compute-0 sudo[165963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siokzzvpnmgpyjiqyfnbdrgjakiewszm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021149.3877888-253-212811682945812/AnsiballZ_lineinfile.py'
Nov 24 21:52:29 compute-0 sudo[165963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:29 compute-0 python3.9[165965]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:52:30 compute-0 sudo[165963]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:30 compute-0 sudo[166115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvrjsautnexpgxdvtcohcwczshfgxhqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021150.2540245-261-131056480043704/AnsiballZ_replace.py'
Nov 24 21:52:30 compute-0 sudo[166115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:31 compute-0 python3.9[166117]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:52:31 compute-0 sudo[166115]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:31 compute-0 sudo[166267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phgboisrjccuuixfwepspuyjjwczdsic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021151.2720816-269-275272450835475/AnsiballZ_replace.py'
Nov 24 21:52:31 compute-0 sudo[166267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:31 compute-0 python3.9[166269]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:52:31 compute-0 sudo[166267]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:32 compute-0 sudo[166419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-famecmoganusoscqequpafpkmkmvkxoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021152.1808593-278-195296680665632/AnsiballZ_lineinfile.py'
Nov 24 21:52:32 compute-0 sudo[166419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:32 compute-0 python3.9[166421]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:52:32 compute-0 sudo[166419]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:33 compute-0 sudo[166571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylirdjdqcdryfpuprhnimkppbtyyrisv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021153.0022259-278-232543527034763/AnsiballZ_lineinfile.py'
Nov 24 21:52:33 compute-0 sudo[166571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:33 compute-0 python3.9[166573]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:52:33 compute-0 sudo[166571]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:34 compute-0 sudo[166723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmhqwcfhotdqcxgiycadoeqrpjhmgbob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021153.8007877-278-169538855943842/AnsiballZ_lineinfile.py'
Nov 24 21:52:34 compute-0 sudo[166723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:34 compute-0 python3.9[166725]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:52:34 compute-0 sudo[166723]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:34 compute-0 sudo[166875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qskzgjfnzvshmkozrxgedttluwapcdzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021154.4609504-278-49233428782597/AnsiballZ_lineinfile.py'
Nov 24 21:52:34 compute-0 sudo[166875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:35 compute-0 python3.9[166877]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:52:35 compute-0 sudo[166875]: pam_unix(sudo:session): session closed for user root
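The multipath.conf edits above first guarantee a blacklist { } section exists and strip a blanket devnode ".*" entry from it, then force four settings under the defaults section: find_multipaths yes, recheck_wwid yes, skip_kpartx yes and user_friendly_names no. A compact sketch of the defaults part, mirroring the four lineinfile tasks (insertafter=^defaults, firstmatch=True); this is an illustration, not the role's implementation:

import re

SETTINGS = {
    "find_multipaths": "yes",
    "recheck_wwid": "yes",
    "skip_kpartx": "yes",
    "user_friendly_names": "no",
}

def ensure_multipath_defaults(path="/etc/multipath.conf"):
    with open(path) as f:
        lines = f.read().splitlines()
    for key, value in SETTINGS.items():
        wanted = f"        {key} {value}"
        existing = next((i for i, text in enumerate(lines)
                         if re.match(rf"^\s+{key}\b", text)), None)
        if existing is not None:
            lines[existing] = wanted              # regexp matched: replace in place
        else:
            anchor = next((i for i, text in enumerate(lines)
                           if text.startswith("defaults")), None)
            if anchor is None:
                lines.append(wanted)              # no defaults section: append at EOF
            else:
                lines.insert(anchor + 1, wanted)  # insertafter=^defaults
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")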
Nov 24 21:52:35 compute-0 sudo[167027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkrhzzxqlenyvjooydgnbaoqvfmlccdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021155.3318863-307-153582990380090/AnsiballZ_stat.py'
Nov 24 21:52:35 compute-0 sudo[167027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:35 compute-0 python3.9[167029]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:52:35 compute-0 sudo[167027]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:36 compute-0 sudo[167181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ludkwmtkbclwwsduimzzzqibabmtbgye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021156.1926293-315-133490223940492/AnsiballZ_file.py'
Nov 24 21:52:36 compute-0 sudo[167181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:36 compute-0 python3.9[167183]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:52:36 compute-0 sudo[167181]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:37 compute-0 sudo[167333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqsvadywqqoafvlcocwxwgugbttnuoef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021157.1492338-324-213514927544039/AnsiballZ_file.py'
Nov 24 21:52:37 compute-0 sudo[167333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:37 compute-0 python3.9[167335]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:52:37 compute-0 sudo[167333]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:38 compute-0 sudo[167485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhdbqcuozuktmmikgvutrzfeggbseztz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021157.9872117-332-244713736618231/AnsiballZ_stat.py'
Nov 24 21:52:38 compute-0 sudo[167485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:38 compute-0 python3.9[167487]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:52:38 compute-0 sudo[167485]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:38 compute-0 sudo[167563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugzojrymjvxjzzofvrwgvmfckrvrxrek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021157.9872117-332-244713736618231/AnsiballZ_file.py'
Nov 24 21:52:38 compute-0 sudo[167563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:39 compute-0 python3.9[167565]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:52:39 compute-0 sudo[167563]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:39 compute-0 sudo[167715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvgdpzoxebzfpphehgigkbdycxkpqqwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021159.3041506-332-5193058575262/AnsiballZ_stat.py'
Nov 24 21:52:39 compute-0 sudo[167715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:39 compute-0 python3.9[167717]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:52:39 compute-0 sudo[167715]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:40 compute-0 sudo[167793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmuusymhmzlpwzacsddhlzdoajgftnyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021159.3041506-332-5193058575262/AnsiballZ_file.py'
Nov 24 21:52:40 compute-0 sudo[167793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:40 compute-0 python3.9[167795]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:52:40 compute-0 sudo[167793]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:41 compute-0 sudo[167945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvhtkqrsixvwzjjbujrnmsjeoxbwxnis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021160.7638857-355-98657814070581/AnsiballZ_file.py'
Nov 24 21:52:41 compute-0 sudo[167945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:41 compute-0 python3.9[167947]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:52:41 compute-0 sudo[167945]: pam_unix(sudo:session): session closed for user root
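One detail worth flagging in the task above: mode=420 is a decimal integer rather than an octal string, most likely an unquoted mode: 0644 in the playbook YAML (YAML parses a leading-zero literal as octal), so the directory is created with 0644 permissions.

# 420 (decimal) == 0o644; Ansible receives the integer 420 and applies rw-r--r--.
assert oct(420) == "0o644"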
Nov 24 21:52:41 compute-0 sudo[168097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaqqpafrvgpytxstkxnudvdzhqludazc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021161.540576-363-220918908540143/AnsiballZ_stat.py'
Nov 24 21:52:41 compute-0 sudo[168097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:42 compute-0 python3.9[168099]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:52:42 compute-0 sudo[168097]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:42 compute-0 sudo[168175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqmhcubrvvshnvzypazxadfwzucavfbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021161.540576-363-220918908540143/AnsiballZ_file.py'
Nov 24 21:52:42 compute-0 sudo[168175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:42 compute-0 python3.9[168177]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:52:42 compute-0 sudo[168175]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:43 compute-0 sudo[168327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvekmypoukwdrxmrjedakwmfnqjlfaev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021162.9442446-375-82696276168046/AnsiballZ_stat.py'
Nov 24 21:52:43 compute-0 sudo[168327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:43 compute-0 python3.9[168329]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:52:43 compute-0 sudo[168327]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:43 compute-0 sudo[168405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzvzjfuoedmthvtfscrovxlryxfijadf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021162.9442446-375-82696276168046/AnsiballZ_file.py'
Nov 24 21:52:43 compute-0 sudo[168405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:44 compute-0 python3.9[168407]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:52:44 compute-0 sudo[168405]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:44 compute-0 sudo[168557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnxshidjmtxwokcyhibovvfiwxuvomuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021164.3880708-387-111120871138008/AnsiballZ_systemd.py'
Nov 24 21:52:44 compute-0 sudo[168557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:45 compute-0 python3.9[168559]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 21:52:45 compute-0 systemd[1]: Reloading.
Nov 24 21:52:45 compute-0 systemd-rc-local-generator[168587]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:52:45 compute-0 systemd-sysv-generator[168590]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:52:45 compute-0 sudo[168557]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:46 compute-0 sudo[168746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvgfcwbvruqqkvgunhfnlcdwbcumdxzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021165.735825-395-68831838962915/AnsiballZ_stat.py'
Nov 24 21:52:46 compute-0 sudo[168746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:46 compute-0 python3.9[168748]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:52:46 compute-0 sudo[168746]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:46 compute-0 sudo[168824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckwjdtyxhtjteewzbwekjhodwxdgkjnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021165.735825-395-68831838962915/AnsiballZ_file.py'
Nov 24 21:52:46 compute-0 sudo[168824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:46 compute-0 python3.9[168826]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:52:46 compute-0 sudo[168824]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:47 compute-0 sudo[168976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tveelooprwxsdxjwfzszkfbmgeeswywd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021167.1363318-407-175204197122087/AnsiballZ_stat.py'
Nov 24 21:52:47 compute-0 sudo[168976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:47 compute-0 python3.9[168978]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:52:47 compute-0 sudo[168976]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:48 compute-0 sudo[169054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmluzlmtvtbgnqtidyloewgdwmweusjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021167.1363318-407-175204197122087/AnsiballZ_file.py'
Nov 24 21:52:48 compute-0 sudo[169054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:48 compute-0 python3.9[169056]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:52:48 compute-0 sudo[169054]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:48 compute-0 sudo[169206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfyfurazhxjrmhahvuwlkncbfypsdvxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021168.5986474-419-107857812374624/AnsiballZ_systemd.py'
Nov 24 21:52:48 compute-0 sudo[169206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:49 compute-0 python3.9[169208]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 21:52:49 compute-0 systemd[1]: Reloading.
Nov 24 21:52:49 compute-0 systemd-rc-local-generator[169236]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:52:49 compute-0 systemd-sysv-generator[169239]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:52:49 compute-0 systemd[1]: Starting Create netns directory...
Nov 24 21:52:49 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 24 21:52:49 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 24 21:52:49 compute-0 systemd[1]: Finished Create netns directory.
Nov 24 21:52:49 compute-0 sudo[169206]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:50 compute-0 sudo[169400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmoahrqcvweygyxbfhuxwwztqxmhsdyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021170.0027633-429-40517053176155/AnsiballZ_file.py'
Nov 24 21:52:50 compute-0 sudo[169400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:50 compute-0 python3.9[169402]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:52:50 compute-0 sudo[169400]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:51 compute-0 sudo[169552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qagklypdzhgyjzyqjrbkjvpixohdpyma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021170.7957606-437-122477124205189/AnsiballZ_stat.py'
Nov 24 21:52:51 compute-0 sudo[169552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:51 compute-0 python3.9[169554]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:52:51 compute-0 sudo[169552]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:51 compute-0 sudo[169675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfqsozhketclswdtfoivhoysrkqveypo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021170.7957606-437-122477124205189/AnsiballZ_copy.py'
Nov 24 21:52:51 compute-0 sudo[169675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:52 compute-0 python3.9[169677]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764021170.7957606-437-122477124205189/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:52:52 compute-0 sudo[169675]: pam_unix(sudo:session): session closed for user root
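[annotation] The two tasks above prepare the healthcheck area: /var/lib/openstack/healthchecks (0755, zuul:zuul, SELinux type container_file_t) and the multipathd healthcheck script (0700). A minimal shell sketch of the same steps; the script body itself is not logged, so the source file used here is a placeholder:

    # Directory with the ownership and SELinux label from the log
    install -d -m 0755 -o zuul -g zuul /var/lib/openstack/healthchecks
    chcon -t container_file_t /var/lib/openstack/healthchecks
    # Healthcheck script for the multipathd container (contents not shown in the log)
    install -D -m 0700 -o zuul -g zuul ./healthcheck /var/lib/openstack/healthchecks/multipathd/healthcheck
    chcon -t container_file_t /var/lib/openstack/healthchecks/multipathd/healthcheck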
Nov 24 21:52:52 compute-0 sudo[169827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jagjlvfnquvqeuruoiepqeghgpcxpkks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021172.5095274-454-275379973357170/AnsiballZ_file.py'
Nov 24 21:52:52 compute-0 sudo[169827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:53 compute-0 python3.9[169829]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:52:53 compute-0 sudo[169827]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:53 compute-0 sudo[169979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frklndkzyrrechinemhalnaywdoixnyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021173.3987675-462-242951960513295/AnsiballZ_stat.py'
Nov 24 21:52:53 compute-0 sudo[169979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:53 compute-0 python3.9[169981]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:52:54 compute-0 sudo[169979]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:54 compute-0 sudo[170102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzrkyyjmwehhvgaxigihhtvcehlerooh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021173.3987675-462-242951960513295/AnsiballZ_copy.py'
Nov 24 21:52:54 compute-0 sudo[170102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:52:54.542 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 21:52:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:52:54.543 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 21:52:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:52:54.544 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 21:52:54 compute-0 python3.9[170104]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764021173.3987675-462-242951960513295/.source.json _original_basename=.c_wr4fkw follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:52:54 compute-0 sudo[170102]: pam_unix(sudo:session): session closed for user root
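[annotation] The copy above installs /var/lib/kolla/config_files/multipathd.json (mode 0600); its contents are masked in the log (content=NOT_LOGGING_PARAMETER). For orientation, a kolla config.json generally has the shape below. Only the command line is corroborated later in the log (CMD='/usr/sbin/multipathd -d'); the config_files entry is an assumed illustration, not the deployed file:

    # Hypothetical kolla config.json layout consumed by kolla_set_configs
    cat > /var/lib/kolla/config_files/multipathd.json <<'EOF'
    {
      "command": "/usr/sbin/multipathd -d",
      "config_files": [
        {
          "source": "/var/lib/kolla/config_files/src/*",
          "dest": "/",
          "merge": true,
          "preserve_properties": true
        }
      ]
    }
    EOF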
Nov 24 21:52:55 compute-0 sudo[170254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsuumqnsfpzboddfvxndqvfzekrdqmdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021174.9032514-477-19585787419877/AnsiballZ_file.py'
Nov 24 21:52:55 compute-0 sudo[170254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:55 compute-0 python3.9[170256]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:52:55 compute-0 sudo[170254]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:56 compute-0 sudo[170406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksjiwgmzombyjazzxlrrsekwvpshrwqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021175.757763-485-244188989625547/AnsiballZ_stat.py'
Nov 24 21:52:56 compute-0 sudo[170406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:56 compute-0 sudo[170406]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:56 compute-0 sudo[170529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vupbdxhjxwpehfsgsekggipsiusrudso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021175.757763-485-244188989625547/AnsiballZ_copy.py'
Nov 24 21:52:56 compute-0 sudo[170529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:57 compute-0 sudo[170529]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:58 compute-0 sudo[170693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsnhfupcqrpghtiayjwowtqoyimuaigk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021177.528199-502-167912662588531/AnsiballZ_container_config_data.py'
Nov 24 21:52:58 compute-0 sudo[170693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:58 compute-0 podman[170655]: 2025-11-24 21:52:58.196270865 +0000 UTC m=+0.157314667 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Nov 24 21:52:58 compute-0 python3.9[170700]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Nov 24 21:52:58 compute-0 sudo[170693]: pam_unix(sudo:session): session closed for user root
Nov 24 21:52:59 compute-0 podman[170833]: 2025-11-24 21:52:59.138064224 +0000 UTC m=+0.054815960 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 24 21:52:59 compute-0 sudo[170878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsrrmguizqxncpinvsmpayrmrrecmdqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021178.6386533-511-269615124589688/AnsiballZ_container_config_hash.py'
Nov 24 21:52:59 compute-0 sudo[170878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:52:59 compute-0 python3.9[170880]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 24 21:52:59 compute-0 sudo[170878]: pam_unix(sudo:session): session closed for user root
Nov 24 21:53:00 compute-0 sudo[171030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amleefzusihlnmiihcsuucbwenjxjvuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021179.6531563-520-33703508655654/AnsiballZ_podman_container_info.py'
Nov 24 21:53:00 compute-0 sudo[171030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:53:00 compute-0 python3.9[171032]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 24 21:53:00 compute-0 sudo[171030]: pam_unix(sudo:session): session closed for user root
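[annotation] podman_container_info with name=None collects inspect data for every container on the host; roughly the same information can be pulled by hand:

    # Approximate manual equivalent of containers.podman.podman_container_info
    podman ps -a --format '{{.Names}}' | xargs -r podman container inspect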
Nov 24 21:53:01 compute-0 sudo[171208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zaoeiwxgxbbbuotwntngmlaerfrkljsi ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764021181.2767773-533-260202707735060/AnsiballZ_edpm_container_manage.py'
Nov 24 21:53:01 compute-0 sudo[171208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:53:02 compute-0 python3[171210]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 24 21:53:02 compute-0 podman[171247]: 2025-11-24 21:53:02.429897477 +0000 UTC m=+0.069296356 container create 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team)
Nov 24 21:53:02 compute-0 podman[171247]: 2025-11-24 21:53:02.391055171 +0000 UTC m=+0.030454090 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 24 21:53:02 compute-0 python3[171210]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 24 21:53:02 compute-0 sudo[171208]: pam_unix(sudo:session): session closed for user root
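[annotation] The PODMAN-CONTAINER-DEBUG entry above records the full podman create call built by edpm_container_manage. The same invocation, line-wrapped for readability (the very long --label config_data=... JSON blob is omitted here; everything else is as logged):

    podman create --name multipathd \
        --conmon-pidfile /run/multipathd.pid \
        --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS \
        --healthcheck-command /openstack/healthcheck \
        --label config_id=multipathd \
        --label container_name=multipathd \
        --label managed_by=edpm_ansible \
        --log-driver journald --log-level info \
        --network host --privileged=True \
        --volume /etc/hosts:/etc/hosts:ro \
        --volume /etc/localtime:/etc/localtime:ro \
        --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro \
        --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro \
        --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro \
        --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro \
        --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro \
        --volume /dev/log:/dev/log \
        --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro \
        --volume /dev:/dev \
        --volume /run/udev:/run/udev \
        --volume /sys:/sys \
        --volume /lib/modules:/lib/modules:ro \
        --volume /etc/iscsi:/etc/iscsi:ro \
        --volume /var/lib/iscsi:/var/lib/iscsi \
        --volume /etc/multipath:/etc/multipath:z \
        --volume /etc/multipath.conf:/etc/multipath.conf:ro \
        --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z \
        quay.io/podified-antelope-centos9/openstack-multipathd:current-podified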
Nov 24 21:53:03 compute-0 sudo[171435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyjgthdzkoxzxnwootdjxssunpeskqzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021183.00775-541-35159125819417/AnsiballZ_stat.py'
Nov 24 21:53:03 compute-0 sudo[171435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:53:03 compute-0 python3.9[171437]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:53:03 compute-0 sudo[171435]: pam_unix(sudo:session): session closed for user root
Nov 24 21:53:04 compute-0 sudo[171589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvqxfcrzxgvzqgnfvypirrcpjhzgfznm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021183.97873-550-243577950657034/AnsiballZ_file.py'
Nov 24 21:53:04 compute-0 sudo[171589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:53:04 compute-0 python3.9[171591]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:53:04 compute-0 sudo[171589]: pam_unix(sudo:session): session closed for user root
Nov 24 21:53:04 compute-0 sudo[171665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzifhwqiedgmqkavfofwnwobhmwertah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021183.97873-550-243577950657034/AnsiballZ_stat.py'
Nov 24 21:53:04 compute-0 sudo[171665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:53:05 compute-0 python3.9[171667]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:53:05 compute-0 sudo[171665]: pam_unix(sudo:session): session closed for user root
Nov 24 21:53:05 compute-0 sudo[171816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlgabektqjyfbmulifvtbnjbpabnddbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021185.1101966-550-124814444872816/AnsiballZ_copy.py'
Nov 24 21:53:05 compute-0 sudo[171816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:53:05 compute-0 python3.9[171818]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764021185.1101966-550-124814444872816/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:53:05 compute-0 sudo[171816]: pam_unix(sudo:session): session closed for user root
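[annotation] The copy above writes /etc/systemd/system/edpm_multipathd.service (root:root, 0644); the unit text is not captured in the log. The sketch below shows the general shape of a podman-wrapping unit of this kind, with the description taken from the later "Starting multipathd container..." message and the PIDFile matching the --conmon-pidfile used at create time; all other fields are assumptions, not the deployed file:

    # Illustrative unit only; the actual edpm_multipathd.service content is not logged.
    cat > /etc/systemd/system/edpm_multipathd.service <<'EOF'
    [Unit]
    Description=multipathd container
    Wants=network-online.target
    After=network-online.target

    [Service]
    Type=forking
    Restart=always
    PIDFile=/run/multipathd.pid
    ExecStart=/usr/bin/podman start multipathd
    ExecStop=/usr/bin/podman stop -t 10 multipathd

    [Install]
    WantedBy=multi-user.target
    EOF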
Nov 24 21:53:06 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Nov 24 21:53:06 compute-0 sudo[171893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmyvtbpkowuudbuidsjunrhnxfrjuebh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021185.1101966-550-124814444872816/AnsiballZ_systemd.py'
Nov 24 21:53:06 compute-0 sudo[171893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:53:06 compute-0 python3.9[171895]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 21:53:06 compute-0 systemd[1]: Reloading.
Nov 24 21:53:06 compute-0 systemd-rc-local-generator[171922]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:53:06 compute-0 systemd-sysv-generator[171926]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:53:07 compute-0 sudo[171893]: pam_unix(sudo:session): session closed for user root
Nov 24 21:53:07 compute-0 sudo[172003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhwybsnishcgckxhmocywphhogvihroz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021185.1101966-550-124814444872816/AnsiballZ_systemd.py'
Nov 24 21:53:07 compute-0 sudo[172003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:53:07 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 24 21:53:07 compute-0 python3.9[172005]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 21:53:08 compute-0 systemd[1]: Reloading.
Nov 24 21:53:08 compute-0 systemd-rc-local-generator[172033]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:53:08 compute-0 systemd-sysv-generator[172039]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:53:09 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Nov 24 21:53:09 compute-0 systemd[1]: Starting multipathd container...
Nov 24 21:53:09 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:53:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d89c4c762828b51257e8a1f7dfb6fda74cf47a264ec84995b5f9eff9d60568d6/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 24 21:53:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d89c4c762828b51257e8a1f7dfb6fda74cf47a264ec84995b5f9eff9d60568d6/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 24 21:53:09 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe.
Nov 24 21:53:09 compute-0 podman[172046]: 2025-11-24 21:53:09.30793621 +0000 UTC m=+0.156483711 container init 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 21:53:09 compute-0 multipathd[172062]: + sudo -E kolla_set_configs
Nov 24 21:53:09 compute-0 podman[172046]: 2025-11-24 21:53:09.348189829 +0000 UTC m=+0.196737350 container start 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.schema-version=1.0)
Nov 24 21:53:09 compute-0 podman[172046]: multipathd
Nov 24 21:53:09 compute-0 sudo[172068]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 24 21:53:09 compute-0 systemd[1]: Started multipathd container.
Nov 24 21:53:09 compute-0 sudo[172068]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 24 21:53:09 compute-0 sudo[172068]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 24 21:53:09 compute-0 sudo[172003]: pam_unix(sudo:session): session closed for user root
Nov 24 21:53:09 compute-0 multipathd[172062]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 24 21:53:09 compute-0 multipathd[172062]: INFO:__main__:Validating config file
Nov 24 21:53:09 compute-0 multipathd[172062]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 24 21:53:09 compute-0 multipathd[172062]: INFO:__main__:Writing out command to execute
Nov 24 21:53:09 compute-0 podman[172069]: 2025-11-24 21:53:09.438634075 +0000 UTC m=+0.080978785 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 24 21:53:09 compute-0 sudo[172068]: pam_unix(sudo:session): session closed for user root
Nov 24 21:53:09 compute-0 multipathd[172062]: ++ cat /run_command
Nov 24 21:53:09 compute-0 systemd[1]: 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe-795b9889ee30655e.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 21:53:09 compute-0 systemd[1]: 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe-795b9889ee30655e.service: Failed with result 'exit-code'.
Nov 24 21:53:09 compute-0 multipathd[172062]: + CMD='/usr/sbin/multipathd -d'
Nov 24 21:53:09 compute-0 multipathd[172062]: + ARGS=
Nov 24 21:53:09 compute-0 multipathd[172062]: + sudo kolla_copy_cacerts
Nov 24 21:53:09 compute-0 sudo[172099]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 24 21:53:09 compute-0 sudo[172099]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 24 21:53:09 compute-0 sudo[172099]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 24 21:53:09 compute-0 sudo[172099]: pam_unix(sudo:session): session closed for user root
Nov 24 21:53:09 compute-0 multipathd[172062]: + [[ ! -n '' ]]
Nov 24 21:53:09 compute-0 multipathd[172062]: + . kolla_extend_start
Nov 24 21:53:09 compute-0 multipathd[172062]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 24 21:53:09 compute-0 multipathd[172062]: Running command: '/usr/sbin/multipathd -d'
Nov 24 21:53:09 compute-0 multipathd[172062]: + umask 0022
Nov 24 21:53:09 compute-0 multipathd[172062]: + exec /usr/sbin/multipathd -d
Nov 24 21:53:09 compute-0 multipathd[172062]: 3037.045626 | --------start up--------
Nov 24 21:53:09 compute-0 multipathd[172062]: 3037.045767 | read /etc/multipath.conf
Nov 24 21:53:09 compute-0 multipathd[172062]: 3037.054187 | path checkers start up
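[annotation] At this point the container has started, kolla_set_configs has written /run_command, and multipathd's path checkers are up. A few standard host-side checks (not taken from the log) that can confirm the state:

    systemctl status edpm_multipathd.service   # unit wrapping the container
    podman ps --filter name=multipathd         # container state
    podman logs --tail 20 multipathd           # kolla startup trace and multipathd output
    podman exec multipathd multipath -ll       # multipath topology seen inside the container
    podman healthcheck run multipathd          # run the configured /openstack/healthcheck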
Nov 24 21:53:10 compute-0 python3.9[172252]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:53:10 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 24 21:53:10 compute-0 sudo[172405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubjoqjaqagxgbnfwnzyngipiajocnagd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021190.6163788-586-256092556114717/AnsiballZ_command.py'
Nov 24 21:53:10 compute-0 sudo[172405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:53:11 compute-0 python3.9[172407]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:53:11 compute-0 sudo[172405]: pam_unix(sudo:session): session closed for user root
Nov 24 21:53:11 compute-0 sudo[172569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-beqlfephwrytsxjumaqisrpvjahoaoap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021191.4499342-594-181438154918850/AnsiballZ_systemd.py'
Nov 24 21:53:11 compute-0 sudo[172569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:53:12 compute-0 python3.9[172571]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 21:53:12 compute-0 systemd[1]: Stopping multipathd container...
Nov 24 21:53:12 compute-0 multipathd[172062]: 3039.771505 | exit (signal)
Nov 24 21:53:12 compute-0 multipathd[172062]: 3039.771568 | --------shut down-------
Nov 24 21:53:12 compute-0 systemd[1]: libpod-5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe.scope: Deactivated successfully.
Nov 24 21:53:12 compute-0 podman[172575]: 2025-11-24 21:53:12.268145788 +0000 UTC m=+0.095750660 container died 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 24 21:53:12 compute-0 systemd[1]: 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe-795b9889ee30655e.timer: Deactivated successfully.
Nov 24 21:53:12 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe.
Nov 24 21:53:12 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe-userdata-shm.mount: Deactivated successfully.
Nov 24 21:53:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-d89c4c762828b51257e8a1f7dfb6fda74cf47a264ec84995b5f9eff9d60568d6-merged.mount: Deactivated successfully.
Nov 24 21:53:12 compute-0 podman[172575]: 2025-11-24 21:53:12.327567238 +0000 UTC m=+0.155172120 container cleanup 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Nov 24 21:53:12 compute-0 podman[172575]: multipathd
Nov 24 21:53:12 compute-0 podman[172605]: multipathd
Nov 24 21:53:12 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Nov 24 21:53:12 compute-0 systemd[1]: Stopped multipathd container.
Nov 24 21:53:12 compute-0 systemd[1]: Starting multipathd container...
Nov 24 21:53:12 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:53:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d89c4c762828b51257e8a1f7dfb6fda74cf47a264ec84995b5f9eff9d60568d6/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 24 21:53:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d89c4c762828b51257e8a1f7dfb6fda74cf47a264ec84995b5f9eff9d60568d6/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 24 21:53:12 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe.
Nov 24 21:53:12 compute-0 podman[172618]: 2025-11-24 21:53:12.626739853 +0000 UTC m=+0.165825758 container init 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 24 21:53:12 compute-0 multipathd[172633]: + sudo -E kolla_set_configs
Nov 24 21:53:12 compute-0 podman[172618]: 2025-11-24 21:53:12.661896737 +0000 UTC m=+0.200982652 container start 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 24 21:53:12 compute-0 podman[172618]: multipathd
Nov 24 21:53:12 compute-0 sudo[172639]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 24 21:53:12 compute-0 sudo[172639]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 24 21:53:12 compute-0 sudo[172639]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 24 21:53:12 compute-0 systemd[1]: Started multipathd container.
Nov 24 21:53:12 compute-0 sudo[172569]: pam_unix(sudo:session): session closed for user root
Nov 24 21:53:12 compute-0 multipathd[172633]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 24 21:53:12 compute-0 multipathd[172633]: INFO:__main__:Validating config file
Nov 24 21:53:12 compute-0 multipathd[172633]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 24 21:53:12 compute-0 multipathd[172633]: INFO:__main__:Writing out command to execute
Nov 24 21:53:12 compute-0 sudo[172639]: pam_unix(sudo:session): session closed for user root
Nov 24 21:53:12 compute-0 multipathd[172633]: ++ cat /run_command
Nov 24 21:53:12 compute-0 multipathd[172633]: + CMD='/usr/sbin/multipathd -d'
Nov 24 21:53:12 compute-0 multipathd[172633]: + ARGS=
Nov 24 21:53:12 compute-0 multipathd[172633]: + sudo kolla_copy_cacerts
Nov 24 21:53:12 compute-0 sudo[172667]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 24 21:53:12 compute-0 sudo[172667]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 24 21:53:12 compute-0 sudo[172667]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 24 21:53:12 compute-0 sudo[172667]: pam_unix(sudo:session): session closed for user root
Nov 24 21:53:12 compute-0 podman[172640]: 2025-11-24 21:53:12.778463587 +0000 UTC m=+0.095935927 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 21:53:12 compute-0 multipathd[172633]: + [[ ! -n '' ]]
Nov 24 21:53:12 compute-0 multipathd[172633]: + . kolla_extend_start
Nov 24 21:53:12 compute-0 multipathd[172633]: Running command: '/usr/sbin/multipathd -d'
Nov 24 21:53:12 compute-0 multipathd[172633]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 24 21:53:12 compute-0 multipathd[172633]: + umask 0022
Nov 24 21:53:12 compute-0 multipathd[172633]: + exec /usr/sbin/multipathd -d
Nov 24 21:53:12 compute-0 systemd[1]: 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe-40b8878497290c62.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 21:53:12 compute-0 systemd[1]: 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe-40b8878497290c62.service: Failed with result 'exit-code'.
Nov 24 21:53:12 compute-0 multipathd[172633]: 3040.352709 | --------start up--------
Nov 24 21:53:12 compute-0 multipathd[172633]: 3040.352735 | read /etc/multipath.conf
Nov 24 21:53:12 compute-0 multipathd[172633]: 3040.360245 | path checkers start up
Nov 24 21:53:13 compute-0 sudo[172823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pimguvnoycdaudtajlttpatdajuagcib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021192.9616961-602-218805028534915/AnsiballZ_file.py'
Nov 24 21:53:13 compute-0 sudo[172823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:53:13 compute-0 python3.9[172825]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:53:13 compute-0 sudo[172823]: pam_unix(sudo:session): session closed for user root
Nov 24 21:53:14 compute-0 sudo[172975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvcbtkagycutsnvlxoniiuvaicjphlel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021194.2050383-614-191932404791974/AnsiballZ_file.py'
Nov 24 21:53:14 compute-0 sudo[172975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:53:14 compute-0 python3.9[172977]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 24 21:53:14 compute-0 sudo[172975]: pam_unix(sudo:session): session closed for user root
Nov 24 21:53:15 compute-0 sudo[173127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amlreycxqnoycgzvtverrfencygvrugn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021195.0347283-622-156492306189508/AnsiballZ_modprobe.py'
Nov 24 21:53:15 compute-0 sudo[173127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:53:15 compute-0 python3.9[173129]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Nov 24 21:53:15 compute-0 kernel: Key type psk registered
Nov 24 21:53:15 compute-0 sudo[173127]: pam_unix(sudo:session): session closed for user root
Nov 24 21:53:16 compute-0 sudo[173289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kabycbkzhfhxtagtnyfeffulrkbbzkqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021195.9697871-630-247960424098105/AnsiballZ_stat.py'
Nov 24 21:53:16 compute-0 sudo[173289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:53:16 compute-0 python3.9[173291]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:53:16 compute-0 sudo[173289]: pam_unix(sudo:session): session closed for user root
Nov 24 21:53:16 compute-0 sudo[173412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmumzvxoehudrfmoeftlgxjgxakxhxyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021195.9697871-630-247960424098105/AnsiballZ_copy.py'
Nov 24 21:53:16 compute-0 sudo[173412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:53:17 compute-0 python3.9[173414]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764021195.9697871-630-247960424098105/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:53:17 compute-0 sudo[173412]: pam_unix(sudo:session): session closed for user root
Nov 24 21:53:17 compute-0 sudo[173564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyjsmsbbbiewelsqfmvhsztcwmacsbxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021197.3900654-646-97174597937326/AnsiballZ_lineinfile.py'
Nov 24 21:53:17 compute-0 sudo[173564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:53:17 compute-0 python3.9[173566]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:53:17 compute-0 sudo[173564]: pam_unix(sudo:session): session closed for user root
Nov 24 21:53:18 compute-0 sudo[173716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-goynrzgsrwthcavryulmqkzqlvogqtnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021198.1924124-654-98667193426479/AnsiballZ_systemd.py'
Nov 24 21:53:18 compute-0 sudo[173716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:53:18 compute-0 python3.9[173718]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 21:53:18 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 24 21:53:18 compute-0 systemd[1]: Stopped Load Kernel Modules.
Nov 24 21:53:18 compute-0 systemd[1]: Stopping Load Kernel Modules...
Nov 24 21:53:18 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 24 21:53:19 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 24 21:53:19 compute-0 sudo[173716]: pam_unix(sudo:session): session closed for user root
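[annotation] The block above loads nvme-fabrics immediately (modprobe, persistent=disabled), drops /etc/modules-load.d/nvme-fabrics.conf, appends the module name to /etc/modules, and restarts systemd-modules-load. A shell sketch of the same effect; the .conf content is assumed to be the usual one-module-per-line format:

    modprobe nvme-fabrics
    printf 'nvme-fabrics\n' > /etc/modules-load.d/nvme-fabrics.conf   # assumed file content
    grep -qxF 'nvme-fabrics' /etc/modules || echo 'nvme-fabrics' >> /etc/modules
    systemctl restart systemd-modules-load.service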
Nov 24 21:53:19 compute-0 sudo[173872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrimmxluwisydohabqhnblvfdhjawxnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021199.4203076-662-224429783341159/AnsiballZ_dnf.py'
Nov 24 21:53:19 compute-0 sudo[173872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:53:20 compute-0 python3.9[173874]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 21:53:22 compute-0 systemd[1]: Reloading.
Nov 24 21:53:22 compute-0 systemd-sysv-generator[173907]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:53:22 compute-0 systemd-rc-local-generator[173903]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:53:22 compute-0 systemd[1]: Reloading.
Nov 24 21:53:22 compute-0 systemd-sysv-generator[173945]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:53:22 compute-0 systemd-rc-local-generator[173938]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:53:23 compute-0 systemd-logind[806]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 24 21:53:23 compute-0 systemd-logind[806]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 24 21:53:23 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 21:53:23 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 24 21:53:23 compute-0 systemd[1]: Reloading.
Nov 24 21:53:23 compute-0 systemd-rc-local-generator[174035]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:53:23 compute-0 systemd-sysv-generator[174040]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:53:23 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 21:53:24 compute-0 sudo[173872]: pam_unix(sudo:session): session closed for user root
Nov 24 21:53:24 compute-0 sudo[175249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bawvafnexgxsalhunnhaqyjlxgpjoqoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021204.6052063-670-200932892473066/AnsiballZ_systemd_service.py'
Nov 24 21:53:24 compute-0 sudo[175249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:53:25 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 21:53:25 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 24 21:53:25 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.988s CPU time.
Nov 24 21:53:25 compute-0 systemd[1]: run-rc91da4472df5445dbe926206099e64da.service: Deactivated successfully.
Nov 24 21:53:25 compute-0 python3.9[175271]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 21:53:25 compute-0 systemd[1]: Stopping Open-iSCSI...
Nov 24 21:53:25 compute-0 iscsid[163696]: iscsid shutting down.
Nov 24 21:53:25 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Nov 24 21:53:25 compute-0 systemd[1]: Stopped Open-iSCSI.
Nov 24 21:53:25 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 24 21:53:25 compute-0 systemd[1]: Starting Open-iSCSI...
Nov 24 21:53:25 compute-0 systemd[1]: Started Open-iSCSI.
Nov 24 21:53:25 compute-0 sudo[175249]: pam_unix(sudo:session): session closed for user root
Nov 24 21:53:25 compute-0 sshd-session[175341]: Invalid user ubuntu from 193.32.162.145 port 57636
Nov 24 21:53:26 compute-0 sshd-session[175341]: Connection closed by invalid user ubuntu 193.32.162.145 port 57636 [preauth]
Nov 24 21:53:26 compute-0 python3.9[175484]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 21:53:27 compute-0 sudo[175638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wplhmkybfceharrluypdtxnudusqtlie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021206.874135-688-254946207568328/AnsiballZ_file.py'
Nov 24 21:53:27 compute-0 sudo[175638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:53:27 compute-0 python3.9[175640]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:53:27 compute-0 sudo[175638]: pam_unix(sudo:session): session closed for user root
Nov 24 21:53:28 compute-0 sudo[175790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fviyqymempgbebyrmgdzhbanqtmqswwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021207.84226-699-130622356471423/AnsiballZ_systemd_service.py'
Nov 24 21:53:28 compute-0 sudo[175790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:53:28 compute-0 podman[175792]: 2025-11-24 21:53:28.386303189 +0000 UTC m=+0.136856086 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Nov 24 21:53:28 compute-0 python3.9[175793]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 21:53:28 compute-0 systemd[1]: Reloading.
Nov 24 21:53:28 compute-0 systemd-rc-local-generator[175846]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:53:28 compute-0 systemd-sysv-generator[175850]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:53:28 compute-0 sudo[175790]: pam_unix(sudo:session): session closed for user root
Nov 24 21:53:29 compute-0 podman[175942]: 2025-11-24 21:53:29.555813611 +0000 UTC m=+0.092960793 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 21:53:29 compute-0 python3.9[176022]: ansible-ansible.builtin.service_facts Invoked
Nov 24 21:53:29 compute-0 network[176039]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 21:53:29 compute-0 network[176040]: 'network-scripts' will be removed from distribution in near future.
Nov 24 21:53:29 compute-0 network[176041]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 21:53:36 compute-0 sudo[176313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvrntyljagyvdywxnrgpimrqxkbwxlpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021216.2887762-718-208762517224531/AnsiballZ_systemd_service.py'
Nov 24 21:53:36 compute-0 sudo[176313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:53:37 compute-0 python3.9[176315]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 21:53:38 compute-0 sudo[176313]: pam_unix(sudo:session): session closed for user root
Nov 24 21:53:38 compute-0 sudo[176466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtojzaljicifhdhkwjtcvkucdhvhjpig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021218.2500238-718-54850096511596/AnsiballZ_systemd_service.py'
Nov 24 21:53:38 compute-0 sudo[176466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:53:38 compute-0 python3.9[176468]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 21:53:39 compute-0 sudo[176466]: pam_unix(sudo:session): session closed for user root
Nov 24 21:53:39 compute-0 sudo[176619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiasxmimutcpgojktnghiosesfztmuru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021219.193657-718-22547852550262/AnsiballZ_systemd_service.py'
Nov 24 21:53:39 compute-0 sudo[176619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:53:39 compute-0 python3.9[176621]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 21:53:39 compute-0 sudo[176619]: pam_unix(sudo:session): session closed for user root
Nov 24 21:53:40 compute-0 sudo[176772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihdfulqzqvkwjdtojoasizwhxfhyqdds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021220.1317708-718-34042171013097/AnsiballZ_systemd_service.py'
Nov 24 21:53:40 compute-0 sudo[176772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:53:40 compute-0 python3.9[176774]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 21:53:40 compute-0 sudo[176772]: pam_unix(sudo:session): session closed for user root
Nov 24 21:53:41 compute-0 sudo[176925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlcqrxbszgfrpmfkoysgulgmjeigimgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021220.9726923-718-267965142425011/AnsiballZ_systemd_service.py'
Nov 24 21:53:41 compute-0 sudo[176925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:53:41 compute-0 python3.9[176927]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 21:53:41 compute-0 sudo[176925]: pam_unix(sudo:session): session closed for user root
Nov 24 21:53:42 compute-0 sudo[177078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpdcsyrifzhthhklewwazpvvrvcxhjlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021222.0785408-718-141166272599911/AnsiballZ_systemd_service.py'
Nov 24 21:53:42 compute-0 sudo[177078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:53:42 compute-0 python3.9[177080]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 21:53:42 compute-0 sudo[177078]: pam_unix(sudo:session): session closed for user root
Nov 24 21:53:43 compute-0 sudo[177244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vatkutasbkprtwwewmuahhruugkczidp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021222.9114313-718-116368849305675/AnsiballZ_systemd_service.py'
Nov 24 21:53:43 compute-0 sudo[177244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:53:43 compute-0 podman[177205]: 2025-11-24 21:53:43.377694325 +0000 UTC m=+0.092140124 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 24 21:53:43 compute-0 python3.9[177252]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 21:53:43 compute-0 sudo[177244]: pam_unix(sudo:session): session closed for user root
Nov 24 21:53:44 compute-0 sudo[177404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxksnawlcrweitjsiafdktnamqguzzpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021223.8429492-718-104345575988203/AnsiballZ_systemd_service.py'
Nov 24 21:53:44 compute-0 sudo[177404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:53:44 compute-0 python3.9[177406]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 21:53:44 compute-0 sudo[177404]: pam_unix(sudo:session): session closed for user root
Nov 24 21:53:45 compute-0 sudo[177557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlcgmwqytijetnqcjosemdrwmntrnzlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021224.9452362-777-17070755791164/AnsiballZ_file.py'
Nov 24 21:53:45 compute-0 sudo[177557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:53:45 compute-0 python3.9[177559]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:53:45 compute-0 sudo[177557]: pam_unix(sudo:session): session closed for user root
Nov 24 21:53:46 compute-0 sudo[177709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhehitgtopoqzoczxlibzzjcojntizvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021225.673432-777-113704151742287/AnsiballZ_file.py'
Nov 24 21:53:46 compute-0 sudo[177709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:53:46 compute-0 python3.9[177711]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:53:46 compute-0 sudo[177709]: pam_unix(sudo:session): session closed for user root
Nov 24 21:53:46 compute-0 sudo[177861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpqtuymrptetiqottztecpzxfccjhjzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021226.4467976-777-273777216753463/AnsiballZ_file.py'
Nov 24 21:53:46 compute-0 sudo[177861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:53:47 compute-0 python3.9[177863]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:53:47 compute-0 sudo[177861]: pam_unix(sudo:session): session closed for user root
Nov 24 21:53:47 compute-0 sudo[178013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xociqcjmxneqduoxwthcevlqznlwronj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021227.2679284-777-199777504528198/AnsiballZ_file.py'
Nov 24 21:53:47 compute-0 sudo[178013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:53:47 compute-0 python3.9[178015]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:53:47 compute-0 sudo[178013]: pam_unix(sudo:session): session closed for user root
Nov 24 21:53:48 compute-0 sudo[178165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twjbsvqidrfmvsceddpcxkgnojpqosmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021228.0099187-777-219717472718768/AnsiballZ_file.py'
Nov 24 21:53:48 compute-0 sudo[178165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:53:48 compute-0 python3.9[178167]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:53:48 compute-0 sudo[178165]: pam_unix(sudo:session): session closed for user root
Nov 24 21:53:49 compute-0 sudo[178317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnyvelibtplobadsnpyirvhmtjsllbdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021228.749761-777-41159167301671/AnsiballZ_file.py'
Nov 24 21:53:49 compute-0 sudo[178317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:53:49 compute-0 python3.9[178319]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:53:49 compute-0 sudo[178317]: pam_unix(sudo:session): session closed for user root
Nov 24 21:53:49 compute-0 sudo[178469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uulqwwnvgjthylmwmfssotbiwzhvpdjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021229.5391061-777-65876450825142/AnsiballZ_file.py'
Nov 24 21:53:49 compute-0 sudo[178469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:53:50 compute-0 python3.9[178471]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:53:50 compute-0 sudo[178469]: pam_unix(sudo:session): session closed for user root
Nov 24 21:53:50 compute-0 sudo[178621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovspeoaxqecitwhvupvomdwtupljymff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021230.3388858-777-198721635679688/AnsiballZ_file.py'
Nov 24 21:53:50 compute-0 sudo[178621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:53:50 compute-0 python3.9[178623]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:53:50 compute-0 sudo[178621]: pam_unix(sudo:session): session closed for user root
Nov 24 21:53:51 compute-0 sudo[178773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgvptivhhusorxwfhawrwluryoorzlbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021231.0981731-834-247619483065519/AnsiballZ_file.py'
Nov 24 21:53:51 compute-0 sudo[178773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:53:51 compute-0 python3.9[178775]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:53:51 compute-0 sudo[178773]: pam_unix(sudo:session): session closed for user root
Nov 24 21:53:52 compute-0 sudo[178925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojpayyzivmqsxtckqbezjhooqhfnhmmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021231.958213-834-7354573608215/AnsiballZ_file.py'
Nov 24 21:53:52 compute-0 sudo[178925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:53:52 compute-0 python3.9[178927]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:53:52 compute-0 sudo[178925]: pam_unix(sudo:session): session closed for user root
Nov 24 21:53:53 compute-0 sudo[179077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhxbjujsbpfyhxcujwdnwxmtujjgyzjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021232.786055-834-49124091847374/AnsiballZ_file.py'
Nov 24 21:53:53 compute-0 sudo[179077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:53:53 compute-0 python3.9[179079]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:53:53 compute-0 sudo[179077]: pam_unix(sudo:session): session closed for user root
Nov 24 21:53:53 compute-0 sudo[179229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oenppssbscsocoealobpkpepkqiynkjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021233.4855406-834-239300296186497/AnsiballZ_file.py'
Nov 24 21:53:53 compute-0 sudo[179229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:53:54 compute-0 python3.9[179231]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:53:54 compute-0 sudo[179229]: pam_unix(sudo:session): session closed for user root
Nov 24 21:53:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:53:54.543 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 21:53:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:53:54.543 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 21:53:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:53:54.543 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 21:53:54 compute-0 sudo[179381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wryclsgptqiwdddljsdjirmwbesybdll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021234.293224-834-90498609843205/AnsiballZ_file.py'
Nov 24 21:53:54 compute-0 sudo[179381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:53:54 compute-0 python3.9[179383]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:53:54 compute-0 sudo[179381]: pam_unix(sudo:session): session closed for user root
Nov 24 21:53:55 compute-0 sudo[179533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqaylonzvbmtwbglzikkgrumyedfdbxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021235.0526628-834-179181539234059/AnsiballZ_file.py'
Nov 24 21:53:55 compute-0 sudo[179533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:53:55 compute-0 python3.9[179535]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:53:55 compute-0 sudo[179533]: pam_unix(sudo:session): session closed for user root
Nov 24 21:53:56 compute-0 sudo[179685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzfbguxmaubojjgotfhfcpvukaxtsgct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021235.8382492-834-159914140942240/AnsiballZ_file.py'
Nov 24 21:53:56 compute-0 sudo[179685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:53:56 compute-0 python3.9[179687]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:53:56 compute-0 sudo[179685]: pam_unix(sudo:session): session closed for user root
Nov 24 21:53:56 compute-0 sudo[179837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bixaljddraeptkcqalazakdnjqghfops ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021236.6284409-834-124793214287406/AnsiballZ_file.py'
Nov 24 21:53:56 compute-0 sudo[179837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:53:57 compute-0 python3.9[179839]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:53:57 compute-0 sudo[179837]: pam_unix(sudo:session): session closed for user root
Nov 24 21:53:57 compute-0 sudo[179989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjkieigbtosweeydrkbktuboldjwqxqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021237.5114262-892-261111220506747/AnsiballZ_command.py'
Nov 24 21:53:57 compute-0 sudo[179989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:53:58 compute-0 python3.9[179991]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:53:58 compute-0 sudo[179989]: pam_unix(sudo:session): session closed for user root
Nov 24 21:53:58 compute-0 podman[180046]: 2025-11-24 21:53:58.574279915 +0000 UTC m=+0.129204227 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 24 21:53:59 compute-0 python3.9[180169]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 24 21:53:59 compute-0 sudo[180330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsenonvmknbqdeddpcqdozhwfyhafgqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021239.4119887-910-23151216921379/AnsiballZ_systemd_service.py'
Nov 24 21:53:59 compute-0 sudo[180330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:53:59 compute-0 podman[180293]: 2025-11-24 21:53:59.78300936 +0000 UTC m=+0.071460413 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 24 21:54:00 compute-0 python3.9[180338]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 21:54:00 compute-0 systemd[1]: Reloading.
Nov 24 21:54:00 compute-0 systemd-rc-local-generator[180369]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:54:00 compute-0 systemd-sysv-generator[180372]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:54:00 compute-0 sudo[180330]: pam_unix(sudo:session): session closed for user root
Nov 24 21:54:01 compute-0 sudo[180523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qycoejsfmkwymekiizdritqvxvhdxnvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021240.8363366-918-139161151731472/AnsiballZ_command.py'
Nov 24 21:54:01 compute-0 sudo[180523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:54:01 compute-0 python3.9[180525]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:54:01 compute-0 sudo[180523]: pam_unix(sudo:session): session closed for user root
Nov 24 21:54:02 compute-0 sudo[180676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkgheqhzmkorcrdexazfrmycfdnlfhlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021241.7582304-918-82461943438310/AnsiballZ_command.py'
Nov 24 21:54:02 compute-0 sudo[180676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:54:02 compute-0 python3.9[180678]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:54:02 compute-0 sudo[180676]: pam_unix(sudo:session): session closed for user root
Nov 24 21:54:02 compute-0 sudo[180829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrdewexjagkjlaaihcdwbptskybfrpld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021242.5705397-918-39809458155925/AnsiballZ_command.py'
Nov 24 21:54:02 compute-0 sudo[180829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:54:03 compute-0 python3.9[180831]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:54:03 compute-0 sudo[180829]: pam_unix(sudo:session): session closed for user root
Nov 24 21:54:03 compute-0 sudo[180982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eaaysewrpntbaqxzcklcpliubbpmbnfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021243.3117192-918-132134187027538/AnsiballZ_command.py'
Nov 24 21:54:03 compute-0 sudo[180982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:54:03 compute-0 python3.9[180984]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:54:03 compute-0 sudo[180982]: pam_unix(sudo:session): session closed for user root
Nov 24 21:54:04 compute-0 sudo[181135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydenkgwomkwbgtqzkfslsuwmgbofeooj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021244.1066763-918-176229138431870/AnsiballZ_command.py'
Nov 24 21:54:04 compute-0 sudo[181135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:54:04 compute-0 python3.9[181137]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:54:04 compute-0 sudo[181135]: pam_unix(sudo:session): session closed for user root
Nov 24 21:54:05 compute-0 sudo[181288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtukfstrfkxixlsomuokzczfiykrlzff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021244.9501975-918-86439146656800/AnsiballZ_command.py'
Nov 24 21:54:05 compute-0 sudo[181288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:54:05 compute-0 python3.9[181290]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:54:05 compute-0 sudo[181288]: pam_unix(sudo:session): session closed for user root
Nov 24 21:54:06 compute-0 sudo[181441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbxkteoqcrorssfgabizkpbmxrvdpmor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021245.755319-918-184131961383484/AnsiballZ_command.py'
Nov 24 21:54:06 compute-0 sudo[181441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:54:06 compute-0 python3.9[181443]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:54:07 compute-0 sudo[181441]: pam_unix(sudo:session): session closed for user root
Nov 24 21:54:07 compute-0 sudo[181594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glvfudnmkxllltgcghtzrchodpzlawek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021247.5686195-918-142623193565241/AnsiballZ_command.py'
Nov 24 21:54:07 compute-0 sudo[181594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:54:08 compute-0 python3.9[181596]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:54:08 compute-0 sudo[181594]: pam_unix(sudo:session): session closed for user root
Nov 24 21:54:09 compute-0 sudo[181747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aundicnlffigxbqomhgznlpapfjboyil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021249.4516945-997-253096379425/AnsiballZ_file.py'
Nov 24 21:54:09 compute-0 sudo[181747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:54:10 compute-0 python3.9[181749]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:54:10 compute-0 sudo[181747]: pam_unix(sudo:session): session closed for user root
Nov 24 21:54:10 compute-0 sudo[181899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcamtyyjiujjiwexsluphubidcwemlfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021250.3086767-997-177910571710171/AnsiballZ_file.py'
Nov 24 21:54:10 compute-0 sudo[181899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:54:10 compute-0 python3.9[181901]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:54:10 compute-0 sudo[181899]: pam_unix(sudo:session): session closed for user root
Nov 24 21:54:11 compute-0 sudo[182051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mywklumgqldzcrbhnrondeahnqxotfry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021251.0409412-997-84373119896287/AnsiballZ_file.py'
Nov 24 21:54:11 compute-0 sudo[182051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:54:11 compute-0 python3.9[182053]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:54:11 compute-0 sudo[182051]: pam_unix(sudo:session): session closed for user root
Nov 24 21:54:12 compute-0 sudo[182203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kepmouotfnrafgrypjlzhenpgbgklvub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021251.885626-1019-138623001645315/AnsiballZ_file.py'
Nov 24 21:54:12 compute-0 sudo[182203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:54:12 compute-0 python3.9[182205]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:54:12 compute-0 sudo[182203]: pam_unix(sudo:session): session closed for user root
Nov 24 21:54:13 compute-0 sudo[182355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frbmynndqrsfqbbcdftwzhedxwnrmzae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021252.6685834-1019-5146482574163/AnsiballZ_file.py'
Nov 24 21:54:13 compute-0 sudo[182355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:54:13 compute-0 python3.9[182357]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:54:13 compute-0 sudo[182355]: pam_unix(sudo:session): session closed for user root
Nov 24 21:54:13 compute-0 podman[182382]: 2025-11-24 21:54:13.549192435 +0000 UTC m=+0.100087532 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 24 21:54:13 compute-0 sudo[182528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfsnmkcvqrebwdlaaxvwsgihgjrdetpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021253.4876919-1019-265921941643074/AnsiballZ_file.py'
Nov 24 21:54:13 compute-0 sudo[182528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:54:14 compute-0 python3.9[182530]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:54:14 compute-0 sudo[182528]: pam_unix(sudo:session): session closed for user root
Nov 24 21:54:14 compute-0 sudo[182680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqximujvcwkatgapchbwkqhzocewgjmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021254.3220966-1019-135204048370903/AnsiballZ_file.py'
Nov 24 21:54:14 compute-0 sudo[182680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:54:14 compute-0 python3.9[182682]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:54:14 compute-0 sudo[182680]: pam_unix(sudo:session): session closed for user root
Nov 24 21:54:15 compute-0 sudo[182832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vypqpreutqikpoaxviiisjsndqxeochr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021255.1619394-1019-39574773680322/AnsiballZ_file.py'
Nov 24 21:54:15 compute-0 sudo[182832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:54:15 compute-0 python3.9[182834]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:54:15 compute-0 sudo[182832]: pam_unix(sudo:session): session closed for user root
Nov 24 21:54:16 compute-0 sudo[182984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kaycngrlxrnuypgvobszoukwjvcpnnry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021255.9185534-1019-11880953088336/AnsiballZ_file.py'
Nov 24 21:54:16 compute-0 sudo[182984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:54:16 compute-0 python3.9[182986]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:54:16 compute-0 sudo[182984]: pam_unix(sudo:session): session closed for user root
Nov 24 21:54:17 compute-0 sudo[183136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijyrgdceninvglfhvxrtpktpllraudtq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021256.682682-1019-273360532903107/AnsiballZ_file.py'
Nov 24 21:54:17 compute-0 sudo[183136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:54:17 compute-0 python3.9[183138]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:54:17 compute-0 sudo[183136]: pam_unix(sudo:session): session closed for user root
Nov 24 21:54:22 compute-0 sudo[183288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejycgnponnpvpmxkzthzaryrxbcrpktz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021261.7199852-1188-146565666721660/AnsiballZ_getent.py'
Nov 24 21:54:22 compute-0 sudo[183288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:54:22 compute-0 python3.9[183290]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Nov 24 21:54:22 compute-0 sudo[183288]: pam_unix(sudo:session): session closed for user root
Nov 24 21:54:23 compute-0 sudo[183441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsirigktqffmyhbrekclbcmawrtlibjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021262.7235239-1196-197102763964478/AnsiballZ_group.py'
Nov 24 21:54:23 compute-0 sudo[183441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:54:23 compute-0 python3.9[183443]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 24 21:54:23 compute-0 groupadd[183444]: group added to /etc/group: name=nova, GID=42436
Nov 24 21:54:23 compute-0 groupadd[183444]: group added to /etc/gshadow: name=nova
Nov 24 21:54:23 compute-0 groupadd[183444]: new group: name=nova, GID=42436
Nov 24 21:54:23 compute-0 sudo[183441]: pam_unix(sudo:session): session closed for user root
Nov 24 21:54:24 compute-0 sudo[183599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyodfxniokhbsrlrrawrbjwctzwznvkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021263.7249062-1204-267696119165214/AnsiballZ_user.py'
Nov 24 21:54:24 compute-0 sudo[183599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:54:24 compute-0 python3.9[183601]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 24 21:54:24 compute-0 useradd[183603]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Nov 24 21:54:24 compute-0 useradd[183603]: add 'nova' to group 'libvirt'
Nov 24 21:54:24 compute-0 useradd[183603]: add 'nova' to shadow group 'libvirt'
Nov 24 21:54:24 compute-0 sudo[183599]: pam_unix(sudo:session): session closed for user root
Nov 24 21:54:25 compute-0 sshd-session[183634]: Accepted publickey for zuul from 192.168.122.30 port 50142 ssh2: ECDSA SHA256:EZFVJPqmz26FBCQurIACD0v24tZqtt5C19oQoS60ZHY
Nov 24 21:54:25 compute-0 systemd-logind[806]: New session 25 of user zuul.
Nov 24 21:54:25 compute-0 systemd[1]: Started Session 25 of User zuul.
Nov 24 21:54:25 compute-0 sshd-session[183634]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 21:54:25 compute-0 sshd-session[183637]: Received disconnect from 192.168.122.30 port 50142:11: disconnected by user
Nov 24 21:54:25 compute-0 sshd-session[183637]: Disconnected from user zuul 192.168.122.30 port 50142
Nov 24 21:54:25 compute-0 sshd-session[183634]: pam_unix(sshd:session): session closed for user zuul
Nov 24 21:54:25 compute-0 systemd[1]: session-25.scope: Deactivated successfully.
Nov 24 21:54:25 compute-0 systemd-logind[806]: Session 25 logged out. Waiting for processes to exit.
Nov 24 21:54:25 compute-0 systemd-logind[806]: Removed session 25.
Nov 24 21:54:26 compute-0 python3.9[183787]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:54:27 compute-0 python3.9[183908]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764021266.0500574-1229-213596021945640/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:54:27 compute-0 python3.9[184058]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:54:28 compute-0 python3.9[184134]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:54:29 compute-0 podman[184258]: 2025-11-24 21:54:29.01608977 +0000 UTC m=+0.129192357 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 24 21:54:29 compute-0 sshd-session[184310]: Connection closed by 45.148.10.240 port 41866
Nov 24 21:54:29 compute-0 python3.9[184301]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:54:29 compute-0 python3.9[184431]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764021268.574102-1229-242032004568643/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:54:30 compute-0 podman[184432]: 2025-11-24 21:54:30.01090227 +0000 UTC m=+0.073302113 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 24 21:54:30 compute-0 python3.9[184600]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:54:31 compute-0 python3.9[184721]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764021270.0853193-1229-116407347217026/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:54:31 compute-0 python3.9[184871]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:54:32 compute-0 python3.9[184992]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764021271.3456378-1229-79959725091860/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:54:32 compute-0 python3.9[185142]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:54:33 compute-0 python3.9[185263]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764021272.5180383-1229-211727635436470/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:54:34 compute-0 sudo[185413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unwkimqxfgtdphcjnsqvhzsjoazvnxvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021273.8927236-1312-17559430759440/AnsiballZ_file.py'
Nov 24 21:54:34 compute-0 sudo[185413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:54:34 compute-0 python3.9[185415]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:54:34 compute-0 sudo[185413]: pam_unix(sudo:session): session closed for user root
Nov 24 21:54:34 compute-0 sudo[185565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enkiozdvcfcoygwikvbmrwmdweqkkygk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021274.6700685-1320-30909914714205/AnsiballZ_copy.py'
Nov 24 21:54:34 compute-0 sudo[185565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:54:35 compute-0 python3.9[185567]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:54:35 compute-0 sudo[185565]: pam_unix(sudo:session): session closed for user root
Nov 24 21:54:35 compute-0 sudo[185717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlhmiriyhshxnfmrcbxriaheiimxwmuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021275.4700654-1328-267513925275025/AnsiballZ_stat.py'
Nov 24 21:54:35 compute-0 sudo[185717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:54:36 compute-0 python3.9[185719]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:54:36 compute-0 sudo[185717]: pam_unix(sudo:session): session closed for user root
Nov 24 21:54:36 compute-0 sudo[185869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bioilsxnrurfctqririzwdhpbiltbopx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021276.3002782-1336-141980541616986/AnsiballZ_stat.py'
Nov 24 21:54:36 compute-0 sudo[185869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:54:36 compute-0 python3.9[185871]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:54:36 compute-0 sudo[185869]: pam_unix(sudo:session): session closed for user root
Nov 24 21:54:37 compute-0 sudo[185992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbwplhkfvaqawqjknuskrmrgrcsxujfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021276.3002782-1336-141980541616986/AnsiballZ_copy.py'
Nov 24 21:54:37 compute-0 sudo[185992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:54:37 compute-0 python3.9[185994]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764021276.3002782-1336-141980541616986/.source _original_basename=.h2618ydy follow=False checksum=6cc42ede880127f970140d2916c57c5dc835191f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Nov 24 21:54:37 compute-0 sudo[185992]: pam_unix(sudo:session): session closed for user root
Nov 24 21:54:38 compute-0 python3.9[186146]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:54:39 compute-0 python3.9[186298]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:54:39 compute-0 python3.9[186419]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764021278.7940536-1362-37895531544549/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:54:40 compute-0 python3.9[186569]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:54:41 compute-0 python3.9[186690]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764021280.2381718-1377-257878971585469/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:54:42 compute-0 sudo[186840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbcwfhizqzkpxdkxawxxqtjuuqflkodf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021281.8602853-1394-231868166746209/AnsiballZ_container_config_data.py'
Nov 24 21:54:42 compute-0 sudo[186840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:54:42 compute-0 python3.9[186842]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Nov 24 21:54:42 compute-0 sudo[186840]: pam_unix(sudo:session): session closed for user root
Nov 24 21:54:43 compute-0 sudo[186992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fezpjnakatndbhanyrwxokdizvtuygeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021282.7750745-1403-167921635215033/AnsiballZ_container_config_hash.py'
Nov 24 21:54:43 compute-0 sudo[186992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:54:43 compute-0 python3.9[186994]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 24 21:54:43 compute-0 sudo[186992]: pam_unix(sudo:session): session closed for user root
Nov 24 21:54:44 compute-0 sudo[187161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exdnsbyvtnvkpurmjckrywhansxymmwp ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764021283.7297344-1413-260244497580837/AnsiballZ_edpm_container_manage.py'
Nov 24 21:54:44 compute-0 sudo[187161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:54:44 compute-0 podman[187118]: 2025-11-24 21:54:44.114027435 +0000 UTC m=+0.058799184 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 24 21:54:44 compute-0 python3[187166]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Nov 24 21:54:44 compute-0 podman[187206]: 2025-11-24 21:54:44.667104485 +0000 UTC m=+0.080172094 container create 8a42b1e04b14257945e784a1561820f135da31df35a4d3326f03003da0b3b447 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute_init, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3)
Nov 24 21:54:44 compute-0 podman[187206]: 2025-11-24 21:54:44.628327592 +0000 UTC m=+0.041395241 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 24 21:54:44 compute-0 python3[187166]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Nov 24 21:54:44 compute-0 sudo[187161]: pam_unix(sudo:session): session closed for user root
Nov 24 21:54:45 compute-0 sudo[187393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzeksanybkzembytvplzuuhbnyxsadko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021285.1292322-1421-115498995044250/AnsiballZ_stat.py'
Nov 24 21:54:45 compute-0 sudo[187393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:54:45 compute-0 python3.9[187395]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:54:45 compute-0 sudo[187393]: pam_unix(sudo:session): session closed for user root
Nov 24 21:54:46 compute-0 sudo[187547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irwzhgfcrxjftrjtvvysycmuoqbtcfuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021286.2670507-1433-152088525385618/AnsiballZ_container_config_data.py'
Nov 24 21:54:46 compute-0 sudo[187547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:54:46 compute-0 python3.9[187549]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Nov 24 21:54:46 compute-0 sudo[187547]: pam_unix(sudo:session): session closed for user root
Nov 24 21:54:47 compute-0 sudo[187699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-waylosfepsrgudcezxbwubqwtrvcjbxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021287.210134-1442-184660210009537/AnsiballZ_container_config_hash.py'
Nov 24 21:54:47 compute-0 sudo[187699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:54:47 compute-0 python3.9[187701]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 24 21:54:47 compute-0 sudo[187699]: pam_unix(sudo:session): session closed for user root
Nov 24 21:54:48 compute-0 sudo[187851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdiaghzrtknnkdribdiogytkhuyiyuve ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764021288.2135565-1452-113043700036090/AnsiballZ_edpm_container_manage.py'
Nov 24 21:54:48 compute-0 sudo[187851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:54:48 compute-0 python3[187853]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 24 21:54:49 compute-0 podman[187889]: 2025-11-24 21:54:49.127384385 +0000 UTC m=+0.074235829 container create 3a7819e5e3e18baf93bf8d8278ea11fd8329509becb6a0de1f59d564f1296b87 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=nova_compute, org.label-schema.build-date=20251118, config_id=edpm, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 24 21:54:49 compute-0 podman[187889]: 2025-11-24 21:54:49.092455688 +0000 UTC m=+0.039307162 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 24 21:54:49 compute-0 python3[187853]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Nov 24 21:54:49 compute-0 sudo[187851]: pam_unix(sudo:session): session closed for user root
Nov 24 21:54:49 compute-0 sudo[188077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uaqbnolfvltxjbpikfnxlxhbkdixiasi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021289.5555985-1460-159762411362152/AnsiballZ_stat.py'
Nov 24 21:54:49 compute-0 sudo[188077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:54:50 compute-0 python3.9[188079]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:54:50 compute-0 sudo[188077]: pam_unix(sudo:session): session closed for user root
Nov 24 21:54:50 compute-0 sudo[188231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dskjxeiayopuwnobpjljrbbhrklbdsmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021290.3705313-1469-39970790672966/AnsiballZ_file.py'
Nov 24 21:54:50 compute-0 sudo[188231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:54:50 compute-0 python3.9[188233]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:54:50 compute-0 sudo[188231]: pam_unix(sudo:session): session closed for user root
Nov 24 21:54:51 compute-0 sudo[188382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlgupbdhwowsranbjeobjddipwlzdncx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021291.023238-1469-247891494599602/AnsiballZ_copy.py'
Nov 24 21:54:51 compute-0 sudo[188382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:54:51 compute-0 python3.9[188384]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764021291.023238-1469-247891494599602/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:54:51 compute-0 sudo[188382]: pam_unix(sudo:session): session closed for user root
Nov 24 21:54:51 compute-0 sudo[188458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwppojaxmcqnhpljebhwwcvkjgppxqmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021291.023238-1469-247891494599602/AnsiballZ_systemd.py'
Nov 24 21:54:52 compute-0 sudo[188458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:54:52 compute-0 python3.9[188460]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 21:54:52 compute-0 systemd[1]: Reloading.
Nov 24 21:54:52 compute-0 systemd-rc-local-generator[188486]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:54:52 compute-0 systemd-sysv-generator[188490]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:54:52 compute-0 sudo[188458]: pam_unix(sudo:session): session closed for user root
Nov 24 21:54:53 compute-0 sudo[188568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzqczmasqonwxtxxyeioacbzcxslhnjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021291.023238-1469-247891494599602/AnsiballZ_systemd.py'
Nov 24 21:54:53 compute-0 sudo[188568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:54:53 compute-0 python3.9[188570]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 21:54:53 compute-0 systemd[1]: Reloading.
Nov 24 21:54:53 compute-0 systemd-rc-local-generator[188598]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:54:53 compute-0 systemd-sysv-generator[188602]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:54:53 compute-0 systemd[1]: Starting nova_compute container...
Nov 24 21:54:53 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:54:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d5400d576c37ef71ca03c2f6dad30474af350f6f2c4a632c3d43db850903df1/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 24 21:54:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d5400d576c37ef71ca03c2f6dad30474af350f6f2c4a632c3d43db850903df1/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 24 21:54:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d5400d576c37ef71ca03c2f6dad30474af350f6f2c4a632c3d43db850903df1/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 24 21:54:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d5400d576c37ef71ca03c2f6dad30474af350f6f2c4a632c3d43db850903df1/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 24 21:54:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d5400d576c37ef71ca03c2f6dad30474af350f6f2c4a632c3d43db850903df1/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 24 21:54:53 compute-0 podman[188609]: 2025-11-24 21:54:53.881596227 +0000 UTC m=+0.123802976 container init 3a7819e5e3e18baf93bf8d8278ea11fd8329509becb6a0de1f59d564f1296b87 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:54:53 compute-0 podman[188609]: 2025-11-24 21:54:53.893302949 +0000 UTC m=+0.135509658 container start 3a7819e5e3e18baf93bf8d8278ea11fd8329509becb6a0de1f59d564f1296b87 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 24 21:54:53 compute-0 podman[188609]: nova_compute
Nov 24 21:54:53 compute-0 nova_compute[188624]: + sudo -E kolla_set_configs
Nov 24 21:54:53 compute-0 systemd[1]: Started nova_compute container.
Nov 24 21:54:53 compute-0 sudo[188568]: pam_unix(sudo:session): session closed for user root
Nov 24 21:54:53 compute-0 nova_compute[188624]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 24 21:54:53 compute-0 nova_compute[188624]: INFO:__main__:Validating config file
Nov 24 21:54:53 compute-0 nova_compute[188624]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 24 21:54:53 compute-0 nova_compute[188624]: INFO:__main__:Copying service configuration files
Nov 24 21:54:53 compute-0 nova_compute[188624]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 24 21:54:53 compute-0 nova_compute[188624]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 24 21:54:53 compute-0 nova_compute[188624]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 24 21:54:53 compute-0 nova_compute[188624]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 24 21:54:53 compute-0 nova_compute[188624]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 24 21:54:53 compute-0 nova_compute[188624]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 24 21:54:53 compute-0 nova_compute[188624]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 24 21:54:53 compute-0 nova_compute[188624]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 24 21:54:53 compute-0 nova_compute[188624]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 24 21:54:53 compute-0 nova_compute[188624]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 24 21:54:53 compute-0 nova_compute[188624]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 24 21:54:53 compute-0 nova_compute[188624]: INFO:__main__:Deleting /etc/ceph
Nov 24 21:54:53 compute-0 nova_compute[188624]: INFO:__main__:Creating directory /etc/ceph
Nov 24 21:54:53 compute-0 nova_compute[188624]: INFO:__main__:Setting permission for /etc/ceph
Nov 24 21:54:53 compute-0 nova_compute[188624]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 24 21:54:53 compute-0 nova_compute[188624]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 24 21:54:53 compute-0 nova_compute[188624]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 24 21:54:53 compute-0 nova_compute[188624]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 24 21:54:53 compute-0 nova_compute[188624]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 24 21:54:53 compute-0 nova_compute[188624]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 24 21:54:53 compute-0 nova_compute[188624]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 24 21:54:53 compute-0 nova_compute[188624]: INFO:__main__:Writing out command to execute
Nov 24 21:54:53 compute-0 nova_compute[188624]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 24 21:54:53 compute-0 nova_compute[188624]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 24 21:54:53 compute-0 nova_compute[188624]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 24 21:54:53 compute-0 nova_compute[188624]: ++ cat /run_command
Nov 24 21:54:54 compute-0 nova_compute[188624]: + CMD=nova-compute
Nov 24 21:54:54 compute-0 nova_compute[188624]: + ARGS=
Nov 24 21:54:54 compute-0 nova_compute[188624]: + sudo kolla_copy_cacerts
Nov 24 21:54:54 compute-0 nova_compute[188624]: + [[ ! -n '' ]]
Nov 24 21:54:54 compute-0 nova_compute[188624]: + . kolla_extend_start
Nov 24 21:54:54 compute-0 nova_compute[188624]: Running command: 'nova-compute'
Nov 24 21:54:54 compute-0 nova_compute[188624]: + echo 'Running command: '\''nova-compute'\'''
Nov 24 21:54:54 compute-0 nova_compute[188624]: + umask 0022
Nov 24 21:54:54 compute-0 nova_compute[188624]: + exec nova-compute
Nov 24 21:54:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:54:54.544 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 21:54:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:54:54.544 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 21:54:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:54:54.544 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 21:54:54 compute-0 python3.9[188786]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:54:55 compute-0 nova_compute[188624]: 2025-11-24 21:54:55.860 188628 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 24 21:54:55 compute-0 nova_compute[188624]: 2025-11-24 21:54:55.861 188628 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 24 21:54:55 compute-0 nova_compute[188624]: 2025-11-24 21:54:55.861 188628 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 24 21:54:55 compute-0 nova_compute[188624]: 2025-11-24 21:54:55.861 188628 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Nov 24 21:54:55 compute-0 python3.9[188936]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:54:55 compute-0 nova_compute[188624]: 2025-11-24 21:54:55.988 188628 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.015 188628 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.016 188628 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.603 188628 INFO nova.virt.driver [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.718 188628 INFO nova.compute.provider_config [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.730 188628 DEBUG oslo_concurrency.lockutils [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.731 188628 DEBUG oslo_concurrency.lockutils [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.731 188628 DEBUG oslo_concurrency.lockutils [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.731 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.731 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.731 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.731 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.732 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.732 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.732 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.732 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.732 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.732 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.732 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.733 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.733 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.733 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.733 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.733 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.733 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.733 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.734 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.734 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.734 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.734 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.734 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.734 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.734 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.734 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.735 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.735 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.735 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.735 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.735 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.735 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.735 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.736 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.736 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.736 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.736 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.736 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.736 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.736 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.737 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.737 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.737 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.737 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.737 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.737 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.737 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.738 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.738 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.738 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.738 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.738 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.738 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.738 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.739 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.739 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.739 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.739 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.739 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.739 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.739 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.739 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.740 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.740 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.740 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.740 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.740 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.740 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.740 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.740 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.741 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.741 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.741 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.741 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.741 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.741 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.741 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.741 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.742 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.742 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.742 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.742 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.742 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.742 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.742 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.743 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.743 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.743 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.743 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.743 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.743 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.743 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.743 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.744 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.744 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.744 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.744 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.744 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.744 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.744 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.744 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.745 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.745 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.745 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.745 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.745 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.745 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.745 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.746 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.746 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.746 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.746 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.746 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.746 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.746 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.746 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.747 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.747 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.747 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.747 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.747 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.747 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.747 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.747 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.748 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.748 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.748 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.748 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.748 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.748 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.748 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.748 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.749 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.749 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.749 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.749 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.749 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.749 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.749 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.750 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.750 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.750 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.750 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.750 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.750 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.750 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.750 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.751 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.751 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.751 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.751 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.751 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.751 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.751 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.751 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.752 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.752 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.752 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.752 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.752 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.752 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.752 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.753 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.753 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.753 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.753 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.753 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.753 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.753 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.754 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.754 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.754 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.754 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.754 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.754 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.754 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.754 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.755 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.755 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.755 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.755 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.755 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.755 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.755 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.756 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.756 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.756 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.756 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.756 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.756 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.756 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.756 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.757 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.757 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.757 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.757 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.757 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.757 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.757 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.758 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.758 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.758 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.758 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.758 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.758 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.758 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.758 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.759 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.759 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.759 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.759 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.759 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.759 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.759 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.759 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.760 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.760 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.760 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.760 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.760 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.760 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.760 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.761 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.761 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.761 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.761 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.761 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.761 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.761 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.761 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.762 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.762 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.762 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.762 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.762 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.762 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.762 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.762 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.763 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.763 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.763 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.763 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.763 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.763 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.763 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.764 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.764 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.764 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.764 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.764 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.764 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.764 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.764 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.765 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.765 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.765 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.765 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.765 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.765 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.765 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.766 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.766 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.766 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.766 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.766 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.766 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.766 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.766 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.767 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.767 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.767 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.767 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.767 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.767 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.767 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.768 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.768 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.768 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.768 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.768 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.768 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.768 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.768 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.769 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.769 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.769 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.769 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.769 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.769 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.769 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.769 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.770 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.770 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.770 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.770 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.770 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.770 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.770 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.771 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.771 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.771 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.771 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.771 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.771 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.771 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.771 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.772 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.772 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.772 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.772 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.772 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.772 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.773 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.773 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.773 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.773 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.773 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.773 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.773 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.774 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.774 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.774 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.774 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.774 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.774 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.774 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.774 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.775 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.775 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.775 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.775 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.775 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.775 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.775 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.776 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.776 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.776 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.776 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.776 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.776 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.776 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.776 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.777 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.777 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.777 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.777 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.777 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.777 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.777 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.778 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.778 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.778 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.778 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.778 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.778 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 python3.9[189090]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.779 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.779 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.779 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.779 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.779 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.779 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.779 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.779 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.780 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.780 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.780 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.780 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.780 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.780 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.780 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.781 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.781 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.781 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.781 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.781 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.781 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.781 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.782 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.782 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.782 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.782 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.782 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.782 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.782 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.782 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.783 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.783 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.783 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.783 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.783 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.783 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.783 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.784 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.784 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.784 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.784 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.784 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.784 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.784 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.784 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.785 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.785 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.785 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.785 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.785 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.785 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.785 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.786 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.786 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.786 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.786 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.786 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.786 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.786 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.786 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.787 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.787 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.787 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.787 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.787 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.787 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.787 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.788 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.788 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.788 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.788 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.788 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.788 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.788 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.788 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.789 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.789 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.789 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.789 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.789 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.789 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.789 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.790 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.790 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.790 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.790 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.790 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.790 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.790 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.790 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.791 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.791 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.791 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.791 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.791 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.791 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.791 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.792 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.792 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.792 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.792 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.792 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.792 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.792 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.793 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.793 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.793 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.793 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.793 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.793 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.793 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.793 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.794 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.794 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.794 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.794 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.794 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.794 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.794 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.795 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.795 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.795 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.795 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.795 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.795 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.795 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.796 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.796 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.796 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.796 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.796 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.796 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.796 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.797 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.797 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.797 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.797 188628 WARNING oslo_config.cfg [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 24 21:54:56 compute-0 nova_compute[188624]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 24 21:54:56 compute-0 nova_compute[188624]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 24 21:54:56 compute-0 nova_compute[188624]: and ``live_migration_inbound_addr`` respectively.
Nov 24 21:54:56 compute-0 nova_compute[188624]: ).  Its value may be silently ignored in the future.
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.797 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
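(The deprecation warning above says the value logged here, qemu+tls://%s/system, should instead be expressed through the two replacement options it names. A minimal [libvirt] sketch of that replacement follows; the option names come from the warning itself, but the chosen scheme and the inbound address are assumptions for illustration, not values taken from this log.)

    [libvirt]
    # Replaces the scheme portion of the deprecated live_migration_uri
    # (assumption: native TLS is still desired, matching qemu+tls above).
    live_migration_scheme = tls
    # Replaces the host portion; placeholder value, not present in this log.
    live_migration_inbound_addr = <compute-host-migration-ip>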
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.797 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.797 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.798 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.798 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.798 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.798 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.798 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.798 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.798 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.799 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.799 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.799 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.799 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.799 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.799 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.799 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.800 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.800 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.800 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.800 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.800 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.800 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.800 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.800 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.801 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.801 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.801 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.801 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.801 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.801 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.802 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.802 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.802 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.802 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.802 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.802 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.802 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.803 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.803 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.803 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.803 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.803 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.803 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.803 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.803 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.804 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.804 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.804 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.804 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.804 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.804 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.804 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.805 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.805 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.805 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.805 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.805 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.805 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.805 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.805 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.806 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.806 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.806 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.806 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.806 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.806 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.806 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.807 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.807 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.807 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.807 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.807 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.807 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.807 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.807 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.808 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.808 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.808 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.808 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.808 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.808 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.808 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.809 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.809 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.809 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.809 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.809 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.809 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.809 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.810 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.810 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.810 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.810 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.810 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.810 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.810 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.811 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.811 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.811 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.811 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.811 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.811 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.811 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.811 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.812 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.812 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.812 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.812 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.812 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.812 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.812 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.813 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.813 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.813 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.813 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.813 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.813 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.813 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.813 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.814 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.814 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.814 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.814 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.814 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.814 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.814 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.815 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.815 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.815 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.815 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.815 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.815 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.815 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.815 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.816 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.816 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.816 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.816 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.816 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.816 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.817 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.817 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.817 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.817 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.817 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.817 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.817 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.817 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.818 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.818 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.818 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.818 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.818 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.818 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.818 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.819 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.819 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.819 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.819 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.819 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.819 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.819 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.820 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.820 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.820 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.820 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.820 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.820 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.820 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.820 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.821 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.821 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.821 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.821 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.821 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.821 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.821 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.822 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.822 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.822 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.822 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.822 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.822 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.822 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.823 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.823 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.823 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.823 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.823 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.823 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.823 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.824 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.824 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.824 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.824 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.824 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.824 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.824 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.825 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.825 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.825 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.825 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.825 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.825 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.825 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.825 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.826 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.826 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.826 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.826 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.826 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.826 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.826 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.827 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.827 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.827 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.827 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.827 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.827 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.827 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.827 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.828 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.828 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.828 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.828 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.828 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.828 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.828 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.829 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.829 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.829 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.829 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.829 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.829 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.829 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.829 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.830 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.830 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.830 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.830 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.830 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.830 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.830 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.831 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.831 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.831 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.831 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.831 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.831 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.831 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.832 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.832 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.832 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.832 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.832 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.832 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.832 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.833 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.833 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.833 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.833 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.833 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.833 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.833 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.833 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.834 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.834 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.834 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.834 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.834 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.834 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.834 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.835 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.835 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.835 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.835 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.835 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.835 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.835 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.835 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.836 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.836 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.836 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.836 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.836 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.836 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.836 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.837 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.837 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.837 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.837 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.837 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.837 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.837 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.838 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.838 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.838 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.838 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.838 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.838 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.838 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.838 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.839 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.839 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.839 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.839 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.839 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.839 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.839 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.840 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.840 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.840 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.840 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.840 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.840 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.840 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.841 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.841 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.841 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.841 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.841 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.841 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.841 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.841 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.842 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.842 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.842 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.842 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.842 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.842 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.842 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.843 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.843 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.843 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.843 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.843 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.843 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.843 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.844 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.844 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.844 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.844 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.844 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.844 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.844 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.844 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.845 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.845 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.845 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.845 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.845 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.845 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.845 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.846 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.846 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.846 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.846 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.846 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.846 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.846 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.846 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.847 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.847 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.847 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.847 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.847 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.847 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.847 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.847 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.848 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.848 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.848 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.848 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.848 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.848 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.848 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.849 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.849 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.849 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.849 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.849 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.849 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.849 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.850 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.850 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.850 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.850 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.850 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.850 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.850 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.850 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.851 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.851 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.851 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.851 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.851 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.851 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.851 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.852 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.852 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.852 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.852 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.852 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.852 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.852 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.852 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.853 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.853 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.853 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.853 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.853 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.853 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.853 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.854 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.854 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.854 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.854 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.854 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.854 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.854 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.855 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.855 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.855 188628 DEBUG oslo_service.service [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.855 188628 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.869 188628 DEBUG nova.virt.libvirt.host [None req-7564ee2f-c2de-4363-aecd-6feb3f833f32 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.869 188628 DEBUG nova.virt.libvirt.host [None req-7564ee2f-c2de-4363-aecd-6feb3f833f32 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.869 188628 DEBUG nova.virt.libvirt.host [None req-7564ee2f-c2de-4363-aecd-6feb3f833f32 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.870 188628 DEBUG nova.virt.libvirt.host [None req-7564ee2f-c2de-4363-aecd-6feb3f833f32 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 24 21:54:56 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Nov 24 21:54:56 compute-0 systemd[1]: Started libvirt QEMU daemon.
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.951 188628 DEBUG nova.virt.libvirt.host [None req-7564ee2f-c2de-4363-aecd-6feb3f833f32 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f373884f700> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.954 188628 DEBUG nova.virt.libvirt.host [None req-7564ee2f-c2de-4363-aecd-6feb3f833f32 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f373884f700> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.955 188628 INFO nova.virt.libvirt.driver [None req-7564ee2f-c2de-4363-aecd-6feb3f833f32 - - - - - -] Connection event '1' reason 'None'
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.979 188628 WARNING nova.virt.libvirt.driver [None req-7564ee2f-c2de-4363-aecd-6feb3f833f32 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 24 21:54:56 compute-0 nova_compute[188624]: 2025-11-24 21:54:56.980 188628 DEBUG nova.virt.libvirt.volume.mount [None req-7564ee2f-c2de-4363-aecd-6feb3f833f32 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 24 21:54:57 compute-0 sudo[189300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwrsldtiyyrenhtxjvzvhlrymwukpoiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021297.0758553-1529-223760806525670/AnsiballZ_podman_container.py'
Nov 24 21:54:57 compute-0 sudo[189300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:54:57 compute-0 nova_compute[188624]: 2025-11-24 21:54:57.890 188628 INFO nova.virt.libvirt.host [None req-7564ee2f-c2de-4363-aecd-6feb3f833f32 - - - - - -] Libvirt host capabilities <capabilities>
Nov 24 21:54:57 compute-0 nova_compute[188624]: 
Nov 24 21:54:57 compute-0 nova_compute[188624]:   <host>
Nov 24 21:54:57 compute-0 nova_compute[188624]:     <uuid>c15acc49-e00e-4e10-af5a-4da075840387</uuid>
Nov 24 21:54:57 compute-0 nova_compute[188624]:     <cpu>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <arch>x86_64</arch>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model>EPYC-Rome-v4</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <vendor>AMD</vendor>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <microcode version='16777317'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <signature family='23' model='49' stepping='0'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <maxphysaddr mode='emulate' bits='40'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <feature name='x2apic'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <feature name='tsc-deadline'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <feature name='osxsave'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <feature name='hypervisor'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <feature name='tsc_adjust'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <feature name='spec-ctrl'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <feature name='stibp'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <feature name='arch-capabilities'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <feature name='ssbd'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <feature name='cmp_legacy'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <feature name='topoext'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <feature name='virt-ssbd'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <feature name='lbrv'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <feature name='tsc-scale'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <feature name='vmcb-clean'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <feature name='pause-filter'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <feature name='pfthreshold'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <feature name='svme-addr-chk'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <feature name='rdctl-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <feature name='skip-l1dfl-vmentry'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <feature name='mds-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <feature name='pschange-mc-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <pages unit='KiB' size='4'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <pages unit='KiB' size='2048'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <pages unit='KiB' size='1048576'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:     </cpu>
Nov 24 21:54:57 compute-0 nova_compute[188624]:     <power_management>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <suspend_mem/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <suspend_disk/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <suspend_hybrid/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:     </power_management>
Nov 24 21:54:57 compute-0 nova_compute[188624]:     <iommu support='no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:     <migration_features>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <live/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <uri_transports>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <uri_transport>tcp</uri_transport>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <uri_transport>rdma</uri_transport>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </uri_transports>
Nov 24 21:54:57 compute-0 nova_compute[188624]:     </migration_features>
Nov 24 21:54:57 compute-0 nova_compute[188624]:     <topology>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <cells num='1'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <cell id='0'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:           <memory unit='KiB'>7864312</memory>
Nov 24 21:54:57 compute-0 nova_compute[188624]:           <pages unit='KiB' size='4'>1966078</pages>
Nov 24 21:54:57 compute-0 nova_compute[188624]:           <pages unit='KiB' size='2048'>0</pages>
Nov 24 21:54:57 compute-0 nova_compute[188624]:           <pages unit='KiB' size='1048576'>0</pages>
Nov 24 21:54:57 compute-0 nova_compute[188624]:           <distances>
Nov 24 21:54:57 compute-0 nova_compute[188624]:             <sibling id='0' value='10'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:           </distances>
Nov 24 21:54:57 compute-0 nova_compute[188624]:           <cpus num='8'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:           </cpus>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         </cell>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </cells>
Nov 24 21:54:57 compute-0 nova_compute[188624]:     </topology>
Nov 24 21:54:57 compute-0 nova_compute[188624]:     <cache>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:     </cache>
Nov 24 21:54:57 compute-0 nova_compute[188624]:     <secmodel>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model>selinux</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <doi>0</doi>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 24 21:54:57 compute-0 nova_compute[188624]:     </secmodel>
Nov 24 21:54:57 compute-0 nova_compute[188624]:     <secmodel>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model>dac</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <doi>0</doi>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <baselabel type='kvm'>+107:+107</baselabel>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <baselabel type='qemu'>+107:+107</baselabel>
Nov 24 21:54:57 compute-0 nova_compute[188624]:     </secmodel>
Nov 24 21:54:57 compute-0 nova_compute[188624]:   </host>
Nov 24 21:54:57 compute-0 nova_compute[188624]: 
Nov 24 21:54:57 compute-0 nova_compute[188624]:   <guest>
Nov 24 21:54:57 compute-0 nova_compute[188624]:     <os_type>hvm</os_type>
Nov 24 21:54:57 compute-0 nova_compute[188624]:     <arch name='i686'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <wordsize>32</wordsize>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <domain type='qemu'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <domain type='kvm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:     </arch>
Nov 24 21:54:57 compute-0 nova_compute[188624]:     <features>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <pae/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <nonpae/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <acpi default='on' toggle='yes'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <apic default='on' toggle='no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <cpuselection/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <deviceboot/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <disksnapshot default='on' toggle='no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <externalSnapshot/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:     </features>
Nov 24 21:54:57 compute-0 nova_compute[188624]:   </guest>
Nov 24 21:54:57 compute-0 nova_compute[188624]: 
Nov 24 21:54:57 compute-0 nova_compute[188624]:   <guest>
Nov 24 21:54:57 compute-0 nova_compute[188624]:     <os_type>hvm</os_type>
Nov 24 21:54:57 compute-0 nova_compute[188624]:     <arch name='x86_64'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <wordsize>64</wordsize>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <domain type='qemu'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <domain type='kvm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:     </arch>
Nov 24 21:54:57 compute-0 nova_compute[188624]:     <features>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <acpi default='on' toggle='yes'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <apic default='on' toggle='no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <cpuselection/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <deviceboot/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <disksnapshot default='on' toggle='no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <externalSnapshot/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:     </features>
Nov 24 21:54:57 compute-0 nova_compute[188624]:   </guest>
Nov 24 21:54:57 compute-0 nova_compute[188624]: 
Nov 24 21:54:57 compute-0 nova_compute[188624]: </capabilities>
Nov 24 21:54:57 compute-0 nova_compute[188624]: 
Nov 24 21:54:57 compute-0 nova_compute[188624]: 2025-11-24 21:54:57.902 188628 DEBUG nova.virt.libvirt.host [None req-7564ee2f-c2de-4363-aecd-6feb3f833f32 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 24 21:54:57 compute-0 nova_compute[188624]: 2025-11-24 21:54:57.928 188628 DEBUG nova.virt.libvirt.host [None req-7564ee2f-c2de-4363-aecd-6feb3f833f32 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 24 21:54:57 compute-0 nova_compute[188624]: <domainCapabilities>
Nov 24 21:54:57 compute-0 nova_compute[188624]:   <path>/usr/libexec/qemu-kvm</path>
Nov 24 21:54:57 compute-0 nova_compute[188624]:   <domain>kvm</domain>
Nov 24 21:54:57 compute-0 nova_compute[188624]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 24 21:54:57 compute-0 nova_compute[188624]:   <arch>i686</arch>
Nov 24 21:54:57 compute-0 nova_compute[188624]:   <vcpu max='240'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:   <iothreads supported='yes'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:   <os supported='yes'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:     <enum name='firmware'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:     <loader supported='yes'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <enum name='type'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <value>rom</value>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <value>pflash</value>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <enum name='readonly'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <value>yes</value>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <value>no</value>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <enum name='secure'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <value>no</value>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:57 compute-0 nova_compute[188624]:     </loader>
Nov 24 21:54:57 compute-0 nova_compute[188624]:   </os>
Nov 24 21:54:57 compute-0 nova_compute[188624]:   <cpu>
Nov 24 21:54:57 compute-0 nova_compute[188624]:     <mode name='host-passthrough' supported='yes'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <enum name='hostPassthroughMigratable'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <value>on</value>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <value>off</value>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:57 compute-0 nova_compute[188624]:     </mode>
Nov 24 21:54:57 compute-0 nova_compute[188624]:     <mode name='maximum' supported='yes'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <enum name='maximumMigratable'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <value>on</value>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <value>off</value>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:57 compute-0 nova_compute[188624]:     </mode>
Nov 24 21:54:57 compute-0 nova_compute[188624]:     <mode name='host-model' supported='yes'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <vendor>AMD</vendor>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <feature policy='require' name='x2apic'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <feature policy='require' name='tsc-deadline'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <feature policy='require' name='hypervisor'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <feature policy='require' name='tsc_adjust'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <feature policy='require' name='spec-ctrl'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <feature policy='require' name='stibp'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <feature policy='require' name='ssbd'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <feature policy='require' name='cmp_legacy'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <feature policy='require' name='overflow-recov'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <feature policy='require' name='succor'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <feature policy='require' name='ibrs'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <feature policy='require' name='amd-ssbd'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <feature policy='require' name='virt-ssbd'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <feature policy='require' name='lbrv'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <feature policy='require' name='tsc-scale'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <feature policy='require' name='vmcb-clean'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <feature policy='require' name='flushbyasid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <feature policy='require' name='pause-filter'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <feature policy='require' name='pfthreshold'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <feature policy='require' name='svme-addr-chk'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <feature policy='disable' name='xsaves'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:     </mode>
Nov 24 21:54:57 compute-0 nova_compute[188624]:     <mode name='custom' supported='yes'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='Broadwell'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='Broadwell-IBRS'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='Broadwell-noTSX'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='Broadwell-v1'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='Broadwell-v2'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='Broadwell-v3'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='Broadwell-v4'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='Cascadelake-Server'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='Cascadelake-Server-v1'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='Cascadelake-Server-v2'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='Cascadelake-Server-v3'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='Cascadelake-Server-v4'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='Cascadelake-Server-v5'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='Cooperlake'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='Cooperlake-v1'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='Cooperlake-v2'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='Denverton'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='mpx'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='Denverton-v1'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='mpx'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='Denverton-v2'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='Denverton-v3'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='Dhyana-v2'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='EPYC-Genoa'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='amd-psfd'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='auto-ibrs'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='no-nested-data-bp'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='null-sel-clr-base'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='stibp-always-on'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='EPYC-Genoa-v1'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='amd-psfd'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='auto-ibrs'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='no-nested-data-bp'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='null-sel-clr-base'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='stibp-always-on'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='EPYC-Milan'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='EPYC-Milan-v1'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='EPYC-Milan-v2'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='amd-psfd'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='no-nested-data-bp'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='null-sel-clr-base'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='stibp-always-on'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='EPYC-Rome'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='EPYC-Rome-v1'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='EPYC-Rome-v2'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='EPYC-Rome-v3'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='EPYC-v3'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='EPYC-v4'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='GraniteRapids'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='amx-bf16'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='amx-fp16'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='amx-int8'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='amx-tile'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx-vnni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512-fp16'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='bus-lock-detect'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fbsdp-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fsrc'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fsrs'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fzrm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='mcdt-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pbrsb-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='prefetchiti'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='psdp-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='serialize'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='tsx-ldtrk'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='xfd'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='GraniteRapids-v1'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='amx-bf16'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='amx-fp16'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='amx-int8'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='amx-tile'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx-vnni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512-fp16'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='bus-lock-detect'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fbsdp-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fsrc'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fsrs'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fzrm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='mcdt-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pbrsb-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='prefetchiti'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='psdp-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='serialize'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='tsx-ldtrk'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='xfd'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='GraniteRapids-v2'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='amx-bf16'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='amx-fp16'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='amx-int8'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='amx-tile'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx-vnni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx10'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx10-128'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx10-256'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx10-512'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512-fp16'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='bus-lock-detect'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='cldemote'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fbsdp-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fsrc'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fsrs'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fzrm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='mcdt-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='movdir64b'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='movdiri'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pbrsb-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='prefetchiti'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='psdp-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='serialize'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='ss'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='tsx-ldtrk'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='xfd'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='Haswell'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='Haswell-IBRS'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='Haswell-noTSX'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='Haswell-v1'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='Haswell-v2'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='Haswell-v3'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='Haswell-v4'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='Icelake-Server'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='Icelake-Server-noTSX'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='Icelake-Server-v1'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='Icelake-Server-v2'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='Icelake-Server-v3'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='Icelake-Server-v4'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='Icelake-Server-v5'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='Icelake-Server-v6'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='Icelake-Server-v7'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='IvyBridge'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='IvyBridge-IBRS'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='IvyBridge-v1'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='IvyBridge-v2'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='KnightsMill'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512-4fmaps'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512-4vnniw'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512er'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512pf'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='ss'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='KnightsMill-v1'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512-4fmaps'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512-4vnniw'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512er'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512pf'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='ss'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='Opteron_G4'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fma4'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='xop'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='Opteron_G4-v1'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fma4'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='xop'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='Opteron_G5'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fma4'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='tbm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='xop'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='Opteron_G5-v1'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fma4'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='tbm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='xop'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='SapphireRapids'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='amx-bf16'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='amx-int8'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='amx-tile'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx-vnni'/>
Nov 24 21:54:57 compute-0 python3.9[189302]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512-fp16'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='bus-lock-detect'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fsrc'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fsrs'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fzrm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='serialize'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='tsx-ldtrk'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='xfd'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='SapphireRapids-v1'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='amx-bf16'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='amx-int8'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='amx-tile'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx-vnni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512-fp16'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='bus-lock-detect'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fsrc'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fsrs'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fzrm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='serialize'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='tsx-ldtrk'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='xfd'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='SapphireRapids-v2'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='amx-bf16'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='amx-int8'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='amx-tile'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx-vnni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512-fp16'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='bus-lock-detect'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fbsdp-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fsrc'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fsrs'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fzrm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='psdp-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='serialize'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='tsx-ldtrk'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='xfd'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='SapphireRapids-v3'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='amx-bf16'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='amx-int8'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='amx-tile'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx-vnni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512-fp16'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='bus-lock-detect'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='cldemote'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fbsdp-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fsrc'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fsrs'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fzrm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='movdir64b'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='movdiri'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='psdp-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='serialize'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='ss'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='tsx-ldtrk'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='xfd'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='SierraForest'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx-ifma'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx-ne-convert'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx-vnni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx-vnni-int8'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='bus-lock-detect'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='cmpccxadd'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fbsdp-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fsrs'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='mcdt-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pbrsb-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='psdp-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='serialize'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='SierraForest-v1'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx-ifma'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx-ne-convert'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx-vnni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='avx-vnni-int8'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='bus-lock-detect'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='cmpccxadd'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fbsdp-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='fsrs'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='mcdt-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pbrsb-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='psdp-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='serialize'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='Skylake-Client'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='Skylake-Client-IBRS'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:57 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Client-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Client-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Client-v3'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Client-v4'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Server'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Server-IBRS'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Server-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Server-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Server-v3'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Server-v4'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Server-v5'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Snowridge'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='cldemote'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='core-capability'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdir64b'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdiri'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='mpx'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='split-lock-detect'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Snowridge-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='cldemote'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='core-capability'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdir64b'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdiri'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='mpx'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='split-lock-detect'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Snowridge-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='cldemote'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='core-capability'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdir64b'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdiri'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='split-lock-detect'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Snowridge-v3'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='cldemote'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='core-capability'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdir64b'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdiri'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='split-lock-detect'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Snowridge-v4'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='cldemote'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdir64b'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdiri'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='athlon'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='3dnow'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='3dnowext'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='athlon-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='3dnow'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='3dnowext'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='core2duo'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ss'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='core2duo-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ss'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='coreduo'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ss'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='coreduo-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ss'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='n270'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ss'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='n270-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ss'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='phenom'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='3dnow'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='3dnowext'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='phenom-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='3dnow'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='3dnowext'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </mode>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   </cpu>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   <memoryBacking supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <enum name='sourceType'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <value>file</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <value>anonymous</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <value>memfd</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   </memoryBacking>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   <devices>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <disk supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='diskDevice'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>disk</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>cdrom</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>floppy</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>lun</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='bus'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>ide</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>fdc</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>scsi</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>virtio</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>usb</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>sata</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='model'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>virtio</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>virtio-transitional</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>virtio-non-transitional</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </disk>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <graphics supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='type'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>vnc</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>egl-headless</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>dbus</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </graphics>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <video supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='modelType'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>vga</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>cirrus</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>virtio</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>none</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>bochs</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>ramfb</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </video>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <hostdev supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='mode'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>subsystem</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='startupPolicy'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>default</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>mandatory</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>requisite</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>optional</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='subsysType'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>usb</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>pci</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>scsi</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='capsType'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='pciBackend'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </hostdev>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <rng supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='model'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>virtio</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>virtio-transitional</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>virtio-non-transitional</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='backendModel'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>random</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>egd</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>builtin</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </rng>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <filesystem supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='driverType'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>path</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>handle</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>virtiofs</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </filesystem>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <tpm supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='model'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>tpm-tis</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>tpm-crb</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='backendModel'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>emulator</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>external</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='backendVersion'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>2.0</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </tpm>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <redirdev supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='bus'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>usb</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </redirdev>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <channel supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='type'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>pty</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>unix</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </channel>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <crypto supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='model'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='type'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>qemu</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='backendModel'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>builtin</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </crypto>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <interface supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='backendType'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>default</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>passt</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </interface>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <panic supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='model'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>isa</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>hyperv</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </panic>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <console supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='type'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>null</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>vc</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>pty</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>dev</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>file</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>pipe</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>stdio</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>udp</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>tcp</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>unix</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>qemu-vdagent</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>dbus</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </console>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   </devices>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   <features>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <gic supported='no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <vmcoreinfo supported='yes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <genid supported='yes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <backingStoreInput supported='yes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <backup supported='yes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <async-teardown supported='yes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <ps2 supported='yes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <sev supported='no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <sgx supported='no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <hyperv supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='features'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>relaxed</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>vapic</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>spinlocks</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>vpindex</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>runtime</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>synic</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>stimer</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>reset</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>vendor_id</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>frequencies</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>reenlightenment</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>tlbflush</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>ipi</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>avic</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>emsr_bitmap</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>xmm_input</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <defaults>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <spinlocks>4095</spinlocks>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <stimer_direct>on</stimer_direct>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <tlbflush_direct>on</tlbflush_direct>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <tlbflush_extended>on</tlbflush_extended>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </defaults>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </hyperv>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <launchSecurity supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='sectype'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>tdx</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </launchSecurity>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   </features>
Nov 24 21:54:58 compute-0 nova_compute[188624]: </domainCapabilities>
Nov 24 21:54:58 compute-0 nova_compute[188624]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 24 21:54:58 compute-0 nova_compute[188624]: 2025-11-24 21:54:57.935 188628 DEBUG nova.virt.libvirt.host [None req-7564ee2f-c2de-4363-aecd-6feb3f833f32 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 24 21:54:58 compute-0 nova_compute[188624]: <domainCapabilities>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   <path>/usr/libexec/qemu-kvm</path>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   <domain>kvm</domain>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   <arch>i686</arch>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   <vcpu max='4096'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   <iothreads supported='yes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   <os supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <enum name='firmware'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <loader supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='type'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>rom</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>pflash</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='readonly'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>yes</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>no</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='secure'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>no</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </loader>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   </os>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   <cpu>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <mode name='host-passthrough' supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='hostPassthroughMigratable'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>on</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>off</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </mode>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <mode name='maximum' supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='maximumMigratable'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>on</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>off</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </mode>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <mode name='host-model' supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <vendor>AMD</vendor>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='x2apic'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='tsc-deadline'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='hypervisor'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='tsc_adjust'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='spec-ctrl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='stibp'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='ssbd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='cmp_legacy'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='overflow-recov'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='succor'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='ibrs'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='amd-ssbd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='virt-ssbd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='lbrv'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='tsc-scale'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='vmcb-clean'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='flushbyasid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='pause-filter'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='pfthreshold'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='svme-addr-chk'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='disable' name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </mode>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <mode name='custom' supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Broadwell'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Broadwell-IBRS'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Broadwell-noTSX'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Broadwell-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Broadwell-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Broadwell-v3'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Broadwell-v4'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Cascadelake-Server'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Cascadelake-Server-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Cascadelake-Server-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Cascadelake-Server-v3'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Cascadelake-Server-v4'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Cascadelake-Server-v5'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Cooperlake'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Cooperlake-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Cooperlake-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Denverton'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='mpx'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Denverton-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='mpx'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Denverton-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Denverton-v3'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Dhyana-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='EPYC-Genoa'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amd-psfd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='auto-ibrs'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='no-nested-data-bp'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='null-sel-clr-base'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='stibp-always-on'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='EPYC-Genoa-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amd-psfd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='auto-ibrs'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='no-nested-data-bp'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='null-sel-clr-base'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='stibp-always-on'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='EPYC-Milan'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='EPYC-Milan-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='EPYC-Milan-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amd-psfd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='no-nested-data-bp'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='null-sel-clr-base'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='stibp-always-on'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='EPYC-Rome'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='EPYC-Rome-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='EPYC-Rome-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='EPYC-Rome-v3'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='EPYC-v3'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='EPYC-v4'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='GraniteRapids'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-fp16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-int8'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-tile'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx-vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-fp16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='bus-lock-detect'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fbsdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrc'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrs'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fzrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='mcdt-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pbrsb-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='prefetchiti'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='psdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='serialize'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='tsx-ldtrk'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xfd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='GraniteRapids-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-fp16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-int8'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-tile'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx-vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-fp16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='bus-lock-detect'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fbsdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrc'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrs'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fzrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='mcdt-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pbrsb-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='prefetchiti'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='psdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='serialize'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='tsx-ldtrk'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xfd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='GraniteRapids-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-fp16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-int8'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-tile'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx-vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx10'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx10-128'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx10-256'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx10-512'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-fp16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='bus-lock-detect'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='cldemote'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fbsdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrc'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrs'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fzrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='mcdt-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdir64b'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdiri'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pbrsb-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='prefetchiti'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='psdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='serialize'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ss'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='tsx-ldtrk'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xfd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Haswell'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Haswell-IBRS'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Haswell-noTSX'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Haswell-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Haswell-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Haswell-v3'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Haswell-v4'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Icelake-Server'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Icelake-Server-noTSX'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Icelake-Server-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Icelake-Server-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Icelake-Server-v3'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Icelake-Server-v4'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Icelake-Server-v5'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Icelake-Server-v6'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Icelake-Server-v7'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='IvyBridge'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='IvyBridge-IBRS'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='IvyBridge-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='IvyBridge-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='KnightsMill'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-4fmaps'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-4vnniw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512er'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512pf'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ss'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='KnightsMill-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-4fmaps'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-4vnniw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512er'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512pf'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ss'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Opteron_G4'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fma4'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xop'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Opteron_G4-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fma4'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xop'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Opteron_G5'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fma4'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='tbm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xop'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Opteron_G5-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fma4'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='tbm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xop'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='SapphireRapids'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-int8'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-tile'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx-vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-fp16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='bus-lock-detect'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrc'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrs'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fzrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='serialize'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='tsx-ldtrk'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xfd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='SapphireRapids-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-int8'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-tile'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx-vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-fp16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='bus-lock-detect'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrc'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrs'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fzrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='serialize'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='tsx-ldtrk'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xfd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='SapphireRapids-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-int8'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-tile'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx-vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-fp16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='bus-lock-detect'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fbsdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrc'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrs'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fzrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='psdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='serialize'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='tsx-ldtrk'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xfd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='SapphireRapids-v3'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-int8'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-tile'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx-vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-fp16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='bus-lock-detect'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='cldemote'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fbsdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrc'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrs'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fzrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdir64b'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdiri'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='psdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='serialize'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ss'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='tsx-ldtrk'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xfd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='SierraForest'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx-ifma'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx-ne-convert'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx-vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx-vnni-int8'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='bus-lock-detect'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='cmpccxadd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fbsdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrs'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='mcdt-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pbrsb-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='psdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='serialize'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='SierraForest-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx-ifma'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx-ne-convert'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx-vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx-vnni-int8'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='bus-lock-detect'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='cmpccxadd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fbsdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrs'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='mcdt-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pbrsb-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='psdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='serialize'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Client'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Client-IBRS'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Client-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Client-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Client-v3'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Client-v4'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Server'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Server-IBRS'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Server-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Server-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Server-v3'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Server-v4'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Server-v5'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Snowridge'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='cldemote'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='core-capability'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdir64b'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdiri'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='mpx'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='split-lock-detect'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Snowridge-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='cldemote'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='core-capability'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdir64b'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdiri'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='mpx'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='split-lock-detect'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Snowridge-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='cldemote'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='core-capability'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdir64b'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdiri'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='split-lock-detect'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Snowridge-v3'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='cldemote'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='core-capability'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdir64b'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdiri'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='split-lock-detect'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Snowridge-v4'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='cldemote'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdir64b'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdiri'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='athlon'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='3dnow'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='3dnowext'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='athlon-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='3dnow'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='3dnowext'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='core2duo'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ss'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='core2duo-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ss'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='coreduo'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ss'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='coreduo-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ss'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='n270'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ss'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='n270-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ss'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='phenom'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='3dnow'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='3dnowext'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='phenom-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='3dnow'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='3dnowext'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </mode>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   </cpu>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   <memoryBacking supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <enum name='sourceType'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <value>file</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <value>anonymous</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <value>memfd</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   </memoryBacking>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   <devices>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <disk supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='diskDevice'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>disk</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>cdrom</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>floppy</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>lun</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='bus'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>fdc</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>scsi</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>virtio</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>usb</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>sata</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='model'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>virtio</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>virtio-transitional</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>virtio-non-transitional</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </disk>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <graphics supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='type'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>vnc</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>egl-headless</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>dbus</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </graphics>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <video supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='modelType'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>vga</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>cirrus</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>virtio</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>none</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>bochs</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>ramfb</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </video>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <hostdev supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='mode'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>subsystem</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='startupPolicy'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>default</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>mandatory</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>requisite</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>optional</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='subsysType'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>usb</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>pci</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>scsi</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='capsType'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='pciBackend'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </hostdev>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <rng supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='model'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>virtio</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>virtio-transitional</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>virtio-non-transitional</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='backendModel'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>random</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>egd</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>builtin</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </rng>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <filesystem supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='driverType'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>path</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>handle</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>virtiofs</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </filesystem>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <tpm supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='model'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>tpm-tis</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>tpm-crb</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='backendModel'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>emulator</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>external</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='backendVersion'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>2.0</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </tpm>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <redirdev supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='bus'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>usb</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </redirdev>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <channel supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='type'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>pty</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>unix</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </channel>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <crypto supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='model'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='type'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>qemu</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='backendModel'>
Nov 24 21:54:58 compute-0 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>builtin</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </crypto>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <interface supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='backendType'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>default</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>passt</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </interface>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <panic supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='model'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>isa</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>hyperv</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </panic>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <console supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='type'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>null</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>vc</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>pty</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>dev</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>file</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>pipe</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>stdio</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>udp</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>tcp</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>unix</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>qemu-vdagent</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>dbus</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </console>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   </devices>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   <features>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <gic supported='no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <vmcoreinfo supported='yes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <genid supported='yes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <backingStoreInput supported='yes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <backup supported='yes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <async-teardown supported='yes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <ps2 supported='yes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <sev supported='no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <sgx supported='no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <hyperv supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='features'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>relaxed</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>vapic</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>spinlocks</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>vpindex</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>runtime</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>synic</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>stimer</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>reset</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>vendor_id</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>frequencies</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>reenlightenment</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>tlbflush</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>ipi</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>avic</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>emsr_bitmap</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>xmm_input</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <defaults>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <spinlocks>4095</spinlocks>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <stimer_direct>on</stimer_direct>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <tlbflush_direct>on</tlbflush_direct>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <tlbflush_extended>on</tlbflush_extended>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </defaults>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </hyperv>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <launchSecurity supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='sectype'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>tdx</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </launchSecurity>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   </features>
Nov 24 21:54:58 compute-0 nova_compute[188624]: </domainCapabilities>
Nov 24 21:54:58 compute-0 nova_compute[188624]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
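[editor's note, not part of the log] The `<domainCapabilities>` XML dumped above is what libvirt returns from virConnectGetDomainCapabilities and what nova's _get_domain_capabilities helper logs verbatim; the `<model usable=...>` / `<blockers model=...>` pairs list which named CPU models the host can emulate and, for the unusable ones, which guest CPU features the host is missing. A minimal sketch of reading the same data, assuming the python3-libvirt bindings are installed and qemu:///system is reachable on this compute node (names and arguments here are illustrative, not nova's exact call):

    # Sketch only: fetch domain capabilities and summarize custom-mode CPU models.
    import xml.etree.ElementTree as ET
    import libvirt  # assumption: python3-libvirt available on the host

    conn = libvirt.open('qemu:///system')
    # Same underlying libvirt API the log entry above reflects; arch/machine/virttype
    # chosen here to mirror the q35/kvm case, flags left at 0.
    caps_xml = conn.getDomainCapabilities(None, 'x86_64', 'q35', 'kvm', 0)
    root = ET.fromstring(caps_xml)

    # Walk the <mode name='custom'> section and pair each <model> with its <blockers>,
    # mirroring the usable='yes'/'no' entries seen in the dump.
    for mode in root.findall("./cpu/mode[@name='custom']"):
        for model in mode.findall('model'):
            name = model.text
            if model.get('usable') == 'yes':
                print(f"{name}: usable")
            else:
                blockers = mode.find(f"blockers[@model='{name}']")
                feats = [f.get('name') for f in blockers.findall('feature')] if blockers is not None else []
                print(f"{name}: blocked by {', '.join(feats)}")
    conn.close()

In this dump, for example, only the Westmere variants (and a few deprecated models such as qemu64) report usable='yes'; every Skylake/Cascadelake/Snowridge model is blocked because the EPYC-Rome host lacks Intel-only features like erms, pcid, or the AVX-512 set.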
Nov 24 21:54:58 compute-0 nova_compute[188624]: 2025-11-24 21:54:58.012 188628 DEBUG nova.virt.libvirt.host [None req-7564ee2f-c2de-4363-aecd-6feb3f833f32 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 24 21:54:58 compute-0 nova_compute[188624]: 2025-11-24 21:54:58.017 188628 DEBUG nova.virt.libvirt.host [None req-7564ee2f-c2de-4363-aecd-6feb3f833f32 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 24 21:54:58 compute-0 nova_compute[188624]: <domainCapabilities>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   <path>/usr/libexec/qemu-kvm</path>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   <domain>kvm</domain>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   <arch>x86_64</arch>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   <vcpu max='240'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   <iothreads supported='yes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   <os supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <enum name='firmware'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <loader supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='type'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>rom</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>pflash</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='readonly'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>yes</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>no</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='secure'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>no</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </loader>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   </os>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   <cpu>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <mode name='host-passthrough' supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='hostPassthroughMigratable'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>on</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>off</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </mode>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <mode name='maximum' supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='maximumMigratable'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>on</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>off</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </mode>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <mode name='host-model' supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <vendor>AMD</vendor>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='x2apic'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='tsc-deadline'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='hypervisor'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='tsc_adjust'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='spec-ctrl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='stibp'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='ssbd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='cmp_legacy'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='overflow-recov'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='succor'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='ibrs'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='amd-ssbd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='virt-ssbd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='lbrv'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='tsc-scale'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='vmcb-clean'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='flushbyasid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='pause-filter'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='pfthreshold'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='svme-addr-chk'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='disable' name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </mode>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <mode name='custom' supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Broadwell'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Broadwell-IBRS'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Broadwell-noTSX'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Broadwell-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Broadwell-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Broadwell-v3'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Broadwell-v4'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Cascadelake-Server'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Cascadelake-Server-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Cascadelake-Server-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Cascadelake-Server-v3'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Cascadelake-Server-v4'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Cascadelake-Server-v5'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Cooperlake'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Cooperlake-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 sudo[189300]: pam_unix(sudo:session): session closed for user root
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Cooperlake-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Denverton'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='mpx'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Denverton-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='mpx'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Denverton-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Denverton-v3'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Dhyana-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='EPYC-Genoa'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amd-psfd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='auto-ibrs'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='no-nested-data-bp'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='null-sel-clr-base'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='stibp-always-on'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='EPYC-Genoa-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amd-psfd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='auto-ibrs'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='no-nested-data-bp'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='null-sel-clr-base'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='stibp-always-on'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='EPYC-Milan'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='EPYC-Milan-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='EPYC-Milan-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amd-psfd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='no-nested-data-bp'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='null-sel-clr-base'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='stibp-always-on'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='EPYC-Rome'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='EPYC-Rome-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='EPYC-Rome-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='EPYC-Rome-v3'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='EPYC-v3'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='EPYC-v4'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='GraniteRapids'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-fp16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-int8'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-tile'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx-vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-fp16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='bus-lock-detect'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fbsdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrc'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrs'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fzrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='mcdt-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pbrsb-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='prefetchiti'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='psdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='serialize'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='tsx-ldtrk'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xfd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='GraniteRapids-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-fp16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-int8'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-tile'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx-vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-fp16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='bus-lock-detect'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fbsdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrc'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrs'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fzrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='mcdt-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pbrsb-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='prefetchiti'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='psdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='serialize'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='tsx-ldtrk'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xfd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='GraniteRapids-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-fp16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-int8'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-tile'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx-vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx10'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx10-128'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx10-256'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx10-512'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-fp16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='bus-lock-detect'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='cldemote'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fbsdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrc'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrs'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fzrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='mcdt-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdir64b'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdiri'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pbrsb-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='prefetchiti'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='psdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='serialize'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ss'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='tsx-ldtrk'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xfd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Haswell'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Haswell-IBRS'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Haswell-noTSX'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Haswell-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Haswell-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Haswell-v3'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Haswell-v4'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Icelake-Server'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Icelake-Server-noTSX'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Icelake-Server-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Icelake-Server-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Icelake-Server-v3'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Icelake-Server-v4'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Icelake-Server-v5'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Icelake-Server-v6'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Icelake-Server-v7'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='IvyBridge'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='IvyBridge-IBRS'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='IvyBridge-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='IvyBridge-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='KnightsMill'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-4fmaps'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-4vnniw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512er'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512pf'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ss'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='KnightsMill-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-4fmaps'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-4vnniw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512er'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512pf'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ss'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Opteron_G4'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fma4'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xop'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Opteron_G4-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fma4'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xop'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Opteron_G5'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fma4'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='tbm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xop'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Opteron_G5-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fma4'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='tbm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xop'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='SapphireRapids'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-int8'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-tile'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx-vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-fp16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='bus-lock-detect'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrc'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrs'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fzrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='serialize'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='tsx-ldtrk'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xfd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='SapphireRapids-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-int8'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-tile'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx-vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-fp16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='bus-lock-detect'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrc'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrs'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fzrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='serialize'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='tsx-ldtrk'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xfd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='SapphireRapids-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-int8'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-tile'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx-vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-fp16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='bus-lock-detect'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fbsdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrc'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrs'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fzrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='psdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='serialize'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='tsx-ldtrk'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xfd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='SapphireRapids-v3'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-int8'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-tile'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx-vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-fp16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='bus-lock-detect'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='cldemote'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fbsdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrc'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrs'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fzrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdir64b'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdiri'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='psdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='serialize'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ss'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='tsx-ldtrk'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xfd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='SierraForest'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx-ifma'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx-ne-convert'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx-vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx-vnni-int8'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='bus-lock-detect'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='cmpccxadd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fbsdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrs'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='mcdt-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pbrsb-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='psdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='serialize'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='SierraForest-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx-ifma'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx-ne-convert'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx-vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx-vnni-int8'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='bus-lock-detect'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='cmpccxadd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fbsdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrs'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='mcdt-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pbrsb-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='psdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='serialize'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Client'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Client-IBRS'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Client-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Client-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Client-v3'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Client-v4'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Server'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Server-IBRS'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Server-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Server-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Server-v3'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Server-v4'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Server-v5'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Snowridge'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='cldemote'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='core-capability'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdir64b'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdiri'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='mpx'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='split-lock-detect'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Snowridge-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='cldemote'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='core-capability'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdir64b'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdiri'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='mpx'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='split-lock-detect'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Snowridge-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='cldemote'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='core-capability'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdir64b'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdiri'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='split-lock-detect'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Snowridge-v3'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='cldemote'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='core-capability'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdir64b'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdiri'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='split-lock-detect'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Snowridge-v4'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='cldemote'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdir64b'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdiri'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='athlon'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='3dnow'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='3dnowext'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='athlon-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='3dnow'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='3dnowext'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='core2duo'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ss'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='core2duo-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ss'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='coreduo'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ss'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='coreduo-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ss'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='n270'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ss'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='n270-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ss'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='phenom'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='3dnow'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='3dnowext'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='phenom-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='3dnow'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='3dnowext'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </mode>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   </cpu>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   <memoryBacking supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <enum name='sourceType'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <value>file</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <value>anonymous</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <value>memfd</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   </memoryBacking>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   <devices>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <disk supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='diskDevice'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>disk</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>cdrom</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>floppy</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>lun</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='bus'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>ide</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>fdc</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>scsi</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>virtio</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>usb</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>sata</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='model'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>virtio</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>virtio-transitional</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>virtio-non-transitional</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </disk>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <graphics supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='type'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>vnc</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>egl-headless</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>dbus</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </graphics>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <video supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='modelType'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>vga</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>cirrus</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>virtio</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>none</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>bochs</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>ramfb</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </video>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <hostdev supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='mode'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>subsystem</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='startupPolicy'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>default</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>mandatory</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>requisite</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>optional</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='subsysType'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>usb</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>pci</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>scsi</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='capsType'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='pciBackend'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </hostdev>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <rng supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='model'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>virtio</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>virtio-transitional</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>virtio-non-transitional</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='backendModel'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>random</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>egd</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>builtin</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </rng>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <filesystem supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='driverType'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>path</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>handle</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>virtiofs</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </filesystem>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <tpm supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='model'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>tpm-tis</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>tpm-crb</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='backendModel'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>emulator</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>external</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='backendVersion'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>2.0</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </tpm>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <redirdev supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='bus'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>usb</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </redirdev>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <channel supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='type'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>pty</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>unix</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </channel>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <crypto supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='model'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='type'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>qemu</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='backendModel'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>builtin</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </crypto>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <interface supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='backendType'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>default</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>passt</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </interface>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <panic supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='model'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>isa</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>hyperv</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </panic>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <console supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='type'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>null</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>vc</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>pty</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>dev</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>file</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>pipe</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>stdio</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>udp</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>tcp</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>unix</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>qemu-vdagent</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>dbus</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </console>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   </devices>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   <features>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <gic supported='no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <vmcoreinfo supported='yes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <genid supported='yes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <backingStoreInput supported='yes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <backup supported='yes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <async-teardown supported='yes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <ps2 supported='yes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <sev supported='no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <sgx supported='no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <hyperv supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='features'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>relaxed</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>vapic</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>spinlocks</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>vpindex</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>runtime</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>synic</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>stimer</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>reset</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>vendor_id</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>frequencies</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>reenlightenment</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>tlbflush</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>ipi</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>avic</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>emsr_bitmap</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>xmm_input</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <defaults>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <spinlocks>4095</spinlocks>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <stimer_direct>on</stimer_direct>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <tlbflush_direct>on</tlbflush_direct>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <tlbflush_extended>on</tlbflush_extended>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </defaults>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </hyperv>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <launchSecurity supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='sectype'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>tdx</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </launchSecurity>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   </features>
Nov 24 21:54:58 compute-0 nova_compute[188624]: </domainCapabilities>
Nov 24 21:54:58 compute-0 nova_compute[188624]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 24 21:54:58 compute-0 nova_compute[188624]: 2025-11-24 21:54:58.100 188628 DEBUG nova.virt.libvirt.host [None req-7564ee2f-c2de-4363-aecd-6feb3f833f32 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 24 21:54:58 compute-0 nova_compute[188624]: <domainCapabilities>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   <path>/usr/libexec/qemu-kvm</path>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   <domain>kvm</domain>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   <arch>x86_64</arch>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   <vcpu max='4096'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   <iothreads supported='yes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   <os supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <enum name='firmware'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <value>efi</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <loader supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='type'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>rom</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>pflash</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='readonly'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>yes</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>no</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='secure'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>yes</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>no</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </loader>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   </os>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   <cpu>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <mode name='host-passthrough' supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='hostPassthroughMigratable'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>on</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>off</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </mode>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <mode name='maximum' supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='maximumMigratable'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>on</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>off</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </mode>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <mode name='host-model' supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <vendor>AMD</vendor>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='x2apic'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='tsc-deadline'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='hypervisor'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='tsc_adjust'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='spec-ctrl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='stibp'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='ssbd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='cmp_legacy'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='overflow-recov'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='succor'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='ibrs'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='amd-ssbd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='virt-ssbd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='lbrv'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='tsc-scale'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='vmcb-clean'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='flushbyasid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='pause-filter'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='pfthreshold'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='svme-addr-chk'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <feature policy='disable' name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </mode>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <mode name='custom' supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Broadwell'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Broadwell-IBRS'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Broadwell-noTSX'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Broadwell-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Broadwell-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Broadwell-v3'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Broadwell-v4'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Cascadelake-Server'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Cascadelake-Server-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Cascadelake-Server-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Cascadelake-Server-v3'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Cascadelake-Server-v4'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Cascadelake-Server-v5'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Cooperlake'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Cooperlake-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Cooperlake-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Denverton'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='mpx'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Denverton-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='mpx'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Denverton-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Denverton-v3'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Dhyana-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='EPYC-Genoa'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amd-psfd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='auto-ibrs'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='no-nested-data-bp'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='null-sel-clr-base'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='stibp-always-on'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='EPYC-Genoa-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amd-psfd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='auto-ibrs'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='no-nested-data-bp'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='null-sel-clr-base'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='stibp-always-on'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='EPYC-Milan'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='EPYC-Milan-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='EPYC-Milan-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amd-psfd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='no-nested-data-bp'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='null-sel-clr-base'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='stibp-always-on'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='EPYC-Rome'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='EPYC-Rome-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='EPYC-Rome-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='EPYC-Rome-v3'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='EPYC-v3'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='EPYC-v4'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='GraniteRapids'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-fp16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-int8'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-tile'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx-vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-fp16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='bus-lock-detect'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fbsdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrc'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrs'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fzrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='mcdt-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pbrsb-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='prefetchiti'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='psdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='serialize'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='tsx-ldtrk'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xfd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='GraniteRapids-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-fp16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-int8'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-tile'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx-vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-fp16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='bus-lock-detect'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fbsdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrc'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrs'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fzrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='mcdt-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pbrsb-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='prefetchiti'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='psdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='serialize'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='tsx-ldtrk'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xfd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='GraniteRapids-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-fp16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-int8'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-tile'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx-vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx10'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx10-128'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx10-256'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx10-512'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-fp16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='bus-lock-detect'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='cldemote'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fbsdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrc'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrs'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fzrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='mcdt-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdir64b'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdiri'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pbrsb-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='prefetchiti'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='psdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='serialize'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ss'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='tsx-ldtrk'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xfd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Haswell'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Haswell-IBRS'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Haswell-noTSX'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Haswell-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Haswell-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Haswell-v3'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Haswell-v4'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Icelake-Server'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Icelake-Server-noTSX'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Icelake-Server-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Icelake-Server-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Icelake-Server-v3'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Icelake-Server-v4'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Icelake-Server-v5'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Icelake-Server-v6'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Icelake-Server-v7'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='IvyBridge'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='IvyBridge-IBRS'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='IvyBridge-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='IvyBridge-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='KnightsMill'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-4fmaps'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-4vnniw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512er'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512pf'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ss'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='KnightsMill-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-4fmaps'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-4vnniw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512er'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512pf'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ss'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Opteron_G4'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fma4'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xop'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Opteron_G4-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fma4'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xop'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Opteron_G5'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fma4'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='tbm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xop'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Opteron_G5-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fma4'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='tbm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xop'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='SapphireRapids'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-int8'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-tile'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx-vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-fp16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='bus-lock-detect'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrc'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrs'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fzrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='serialize'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='tsx-ldtrk'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xfd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='SapphireRapids-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-int8'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-tile'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx-vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-fp16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='bus-lock-detect'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrc'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrs'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fzrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='serialize'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='tsx-ldtrk'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xfd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='SapphireRapids-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-int8'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-tile'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx-vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-fp16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='bus-lock-detect'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fbsdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrc'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrs'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fzrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='psdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='serialize'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='tsx-ldtrk'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xfd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='SapphireRapids-v3'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-int8'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='amx-tile'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx-vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-bf16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-fp16'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bitalg'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512ifma'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vbmi2'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='bus-lock-detect'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='cldemote'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fbsdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrc'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrs'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fzrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='la57'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdir64b'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdiri'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='psdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='serialize'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ss'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='taa-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='tsx-ldtrk'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xfd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='SierraForest'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx-ifma'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx-ne-convert'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx-vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx-vnni-int8'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='bus-lock-detect'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='cmpccxadd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fbsdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrs'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='mcdt-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pbrsb-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='psdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='serialize'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='SierraForest-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx-ifma'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx-ne-convert'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx-vnni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx-vnni-int8'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='bus-lock-detect'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='cmpccxadd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fbsdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='fsrs'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ibrs-all'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='mcdt-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pbrsb-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='psdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='serialize'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vaes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='vpclmulqdq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Client'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Client-IBRS'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Client-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Client-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Client-v3'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Client-v4'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Server'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Server-IBRS'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Server-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Server-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='hle'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='rtm'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Server-v3'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Server-v4'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Skylake-Server-v5'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512bw'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512cd'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512dq'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512f'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='avx512vl'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='invpcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pcid'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='pku'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Snowridge'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='cldemote'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='core-capability'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdir64b'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdiri'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='mpx'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='split-lock-detect'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Snowridge-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='cldemote'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='core-capability'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdir64b'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdiri'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='mpx'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='split-lock-detect'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Snowridge-v2'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='cldemote'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='core-capability'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdir64b'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdiri'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='split-lock-detect'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Snowridge-v3'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='cldemote'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='core-capability'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdir64b'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdiri'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='split-lock-detect'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='Snowridge-v4'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='cldemote'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='erms'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='gfni'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdir64b'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='movdiri'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='xsaves'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='athlon'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='3dnow'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='3dnowext'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='athlon-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='3dnow'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='3dnowext'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='core2duo'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ss'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='core2duo-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ss'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='coreduo'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ss'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='coreduo-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ss'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='n270'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ss'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='n270-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='ss'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='phenom'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='3dnow'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='3dnowext'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <blockers model='phenom-v1'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='3dnow'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <feature name='3dnowext'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </blockers>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </mode>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   </cpu>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   <memoryBacking supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <enum name='sourceType'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <value>file</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <value>anonymous</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <value>memfd</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   </memoryBacking>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   <devices>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <disk supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='diskDevice'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>disk</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>cdrom</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>floppy</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>lun</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='bus'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>fdc</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>scsi</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>virtio</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>usb</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>sata</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='model'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>virtio</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>virtio-transitional</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>virtio-non-transitional</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </disk>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <graphics supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='type'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>vnc</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>egl-headless</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>dbus</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </graphics>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <video supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='modelType'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>vga</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>cirrus</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>virtio</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>none</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>bochs</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>ramfb</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </video>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <hostdev supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='mode'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>subsystem</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='startupPolicy'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>default</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>mandatory</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>requisite</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>optional</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='subsysType'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>usb</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>pci</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>scsi</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='capsType'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='pciBackend'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </hostdev>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <rng supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='model'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>virtio</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>virtio-transitional</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>virtio-non-transitional</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='backendModel'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>random</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>egd</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>builtin</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </rng>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <filesystem supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='driverType'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>path</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>handle</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>virtiofs</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </filesystem>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <tpm supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='model'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>tpm-tis</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>tpm-crb</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='backendModel'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>emulator</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>external</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='backendVersion'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>2.0</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </tpm>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <redirdev supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='bus'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>usb</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </redirdev>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <channel supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='type'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>pty</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>unix</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </channel>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <crypto supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='model'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='type'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>qemu</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='backendModel'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>builtin</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </crypto>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <interface supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='backendType'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>default</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>passt</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </interface>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <panic supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='model'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>isa</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>hyperv</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </panic>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <console supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='type'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>null</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>vc</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>pty</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>dev</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>file</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>pipe</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>stdio</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>udp</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>tcp</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>unix</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>qemu-vdagent</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>dbus</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </console>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   </devices>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   <features>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <gic supported='no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <vmcoreinfo supported='yes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <genid supported='yes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <backingStoreInput supported='yes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <backup supported='yes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <async-teardown supported='yes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <ps2 supported='yes'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <sev supported='no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <sgx supported='no'/>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <hyperv supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='features'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>relaxed</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>vapic</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>spinlocks</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>vpindex</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>runtime</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>synic</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>stimer</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>reset</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>vendor_id</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>frequencies</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>reenlightenment</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>tlbflush</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>ipi</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>avic</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>emsr_bitmap</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>xmm_input</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <defaults>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <spinlocks>4095</spinlocks>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <stimer_direct>on</stimer_direct>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <tlbflush_direct>on</tlbflush_direct>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <tlbflush_extended>on</tlbflush_extended>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </defaults>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </hyperv>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     <launchSecurity supported='yes'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       <enum name='sectype'>
Nov 24 21:54:58 compute-0 nova_compute[188624]:         <value>tdx</value>
Nov 24 21:54:58 compute-0 nova_compute[188624]:       </enum>
Nov 24 21:54:58 compute-0 nova_compute[188624]:     </launchSecurity>
Nov 24 21:54:58 compute-0 nova_compute[188624]:   </features>
Nov 24 21:54:58 compute-0 nova_compute[188624]: </domainCapabilities>
Nov 24 21:54:58 compute-0 nova_compute[188624]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 24 21:54:58 compute-0 nova_compute[188624]: 2025-11-24 21:54:58.173 188628 DEBUG nova.virt.libvirt.host [None req-7564ee2f-c2de-4363-aecd-6feb3f833f32 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 24 21:54:58 compute-0 nova_compute[188624]: 2025-11-24 21:54:58.174 188628 DEBUG nova.virt.libvirt.host [None req-7564ee2f-c2de-4363-aecd-6feb3f833f32 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 24 21:54:58 compute-0 nova_compute[188624]: 2025-11-24 21:54:58.174 188628 DEBUG nova.virt.libvirt.host [None req-7564ee2f-c2de-4363-aecd-6feb3f833f32 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 24 21:54:58 compute-0 nova_compute[188624]: 2025-11-24 21:54:58.174 188628 INFO nova.virt.libvirt.host [None req-7564ee2f-c2de-4363-aecd-6feb3f833f32 - - - - - -] Secure Boot support detected
Nov 24 21:54:58 compute-0 nova_compute[188624]: 2025-11-24 21:54:58.177 188628 INFO nova.virt.libvirt.driver [None req-7564ee2f-c2de-4363-aecd-6feb3f833f32 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 24 21:54:58 compute-0 nova_compute[188624]: 2025-11-24 21:54:58.177 188628 INFO nova.virt.libvirt.driver [None req-7564ee2f-c2de-4363-aecd-6feb3f833f32 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 24 21:54:58 compute-0 nova_compute[188624]: 2025-11-24 21:54:58.195 188628 DEBUG nova.virt.libvirt.driver [None req-7564ee2f-c2de-4363-aecd-6feb3f833f32 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Nov 24 21:54:58 compute-0 nova_compute[188624]: 2025-11-24 21:54:58.240 188628 INFO nova.virt.node [None req-7564ee2f-c2de-4363-aecd-6feb3f833f32 - - - - - -] Determined node identity 7680d048-14f1-46f8-a34d-a7eb32eb11df from /var/lib/nova/compute_id
Nov 24 21:54:58 compute-0 nova_compute[188624]: 2025-11-24 21:54:58.264 188628 WARNING nova.compute.manager [None req-7564ee2f-c2de-4363-aecd-6feb3f833f32 - - - - - -] Compute nodes ['7680d048-14f1-46f8-a34d-a7eb32eb11df'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Nov 24 21:54:58 compute-0 nova_compute[188624]: 2025-11-24 21:54:58.305 188628 INFO nova.compute.manager [None req-7564ee2f-c2de-4363-aecd-6feb3f833f32 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Nov 24 21:54:58 compute-0 nova_compute[188624]: 2025-11-24 21:54:58.342 188628 WARNING nova.compute.manager [None req-7564ee2f-c2de-4363-aecd-6feb3f833f32 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 24 21:54:58 compute-0 nova_compute[188624]: 2025-11-24 21:54:58.343 188628 DEBUG oslo_concurrency.lockutils [None req-7564ee2f-c2de-4363-aecd-6feb3f833f32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 21:54:58 compute-0 nova_compute[188624]: 2025-11-24 21:54:58.343 188628 DEBUG oslo_concurrency.lockutils [None req-7564ee2f-c2de-4363-aecd-6feb3f833f32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 21:54:58 compute-0 nova_compute[188624]: 2025-11-24 21:54:58.343 188628 DEBUG oslo_concurrency.lockutils [None req-7564ee2f-c2de-4363-aecd-6feb3f833f32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 21:54:58 compute-0 nova_compute[188624]: 2025-11-24 21:54:58.344 188628 DEBUG nova.compute.resource_tracker [None req-7564ee2f-c2de-4363-aecd-6feb3f833f32 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 21:54:58 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Nov 24 21:54:58 compute-0 systemd[1]: Started libvirt nodedev daemon.
Nov 24 21:54:58 compute-0 nova_compute[188624]: 2025-11-24 21:54:58.667 188628 WARNING nova.virt.libvirt.driver [None req-7564ee2f-c2de-4363-aecd-6feb3f833f32 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 21:54:58 compute-0 nova_compute[188624]: 2025-11-24 21:54:58.669 188628 DEBUG nova.compute.resource_tracker [None req-7564ee2f-c2de-4363-aecd-6feb3f833f32 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6074MB free_disk=72.43341064453125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 21:54:58 compute-0 nova_compute[188624]: 2025-11-24 21:54:58.669 188628 DEBUG oslo_concurrency.lockutils [None req-7564ee2f-c2de-4363-aecd-6feb3f833f32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 21:54:58 compute-0 nova_compute[188624]: 2025-11-24 21:54:58.669 188628 DEBUG oslo_concurrency.lockutils [None req-7564ee2f-c2de-4363-aecd-6feb3f833f32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 21:54:58 compute-0 nova_compute[188624]: 2025-11-24 21:54:58.689 188628 WARNING nova.compute.resource_tracker [None req-7564ee2f-c2de-4363-aecd-6feb3f833f32 - - - - - -] No compute node record for compute-0.ctlplane.example.com:7680d048-14f1-46f8-a34d-a7eb32eb11df: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 7680d048-14f1-46f8-a34d-a7eb32eb11df could not be found.
Nov 24 21:54:58 compute-0 nova_compute[188624]: 2025-11-24 21:54:58.719 188628 INFO nova.compute.resource_tracker [None req-7564ee2f-c2de-4363-aecd-6feb3f833f32 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 7680d048-14f1-46f8-a34d-a7eb32eb11df
Nov 24 21:54:58 compute-0 sudo[189499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gaumzhstagsarxmcrmeodtyswmvbmwla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021298.372908-1537-203885621329401/AnsiballZ_systemd.py'
Nov 24 21:54:58 compute-0 sudo[189499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:54:58 compute-0 nova_compute[188624]: 2025-11-24 21:54:58.802 188628 DEBUG nova.compute.resource_tracker [None req-7564ee2f-c2de-4363-aecd-6feb3f833f32 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 21:54:58 compute-0 nova_compute[188624]: 2025-11-24 21:54:58.802 188628 DEBUG nova.compute.resource_tracker [None req-7564ee2f-c2de-4363-aecd-6feb3f833f32 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 21:54:59 compute-0 python3.9[189501]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 21:54:59 compute-0 nova_compute[188624]: 2025-11-24 21:54:59.809 188628 INFO nova.scheduler.client.report [None req-7564ee2f-c2de-4363-aecd-6feb3f833f32 - - - - - -] [req-6fe02a48-2c91-4483-b244-075b3a8d817f] Created resource provider record via placement API for resource provider with UUID 7680d048-14f1-46f8-a34d-a7eb32eb11df and name compute-0.ctlplane.example.com.
Nov 24 21:54:59 compute-0 systemd[1]: Stopping nova_compute container...
Nov 24 21:54:59 compute-0 nova_compute[188624]: 2025-11-24 21:54:59.935 188628 DEBUG oslo_concurrency.lockutils [None req-7564ee2f-c2de-4363-aecd-6feb3f833f32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.266s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 21:54:59 compute-0 nova_compute[188624]: 2025-11-24 21:54:59.936 188628 DEBUG oslo_concurrency.lockutils [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 21:54:59 compute-0 nova_compute[188624]: 2025-11-24 21:54:59.936 188628 DEBUG oslo_concurrency.lockutils [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 21:54:59 compute-0 nova_compute[188624]: 2025-11-24 21:54:59.936 188628 DEBUG oslo_concurrency.lockutils [None req-8adb56bd-78b5-4d3b-b0b2-c18e2e4be057 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 21:54:59 compute-0 podman[189503]: 2025-11-24 21:54:59.978214487 +0000 UTC m=+0.132714695 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.license=GPLv2)
Nov 24 21:55:00 compute-0 virtqemud[189136]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 24 21:55:00 compute-0 virtqemud[189136]: hostname: compute-0
Nov 24 21:55:00 compute-0 virtqemud[189136]: End of file while reading data: Input/output error
Nov 24 21:55:00 compute-0 systemd[1]: libpod-3a7819e5e3e18baf93bf8d8278ea11fd8329509becb6a0de1f59d564f1296b87.scope: Deactivated successfully.
Nov 24 21:55:00 compute-0 systemd[1]: libpod-3a7819e5e3e18baf93bf8d8278ea11fd8329509becb6a0de1f59d564f1296b87.scope: Consumed 3.096s CPU time.
Nov 24 21:55:00 compute-0 podman[189512]: 2025-11-24 21:55:00.324969877 +0000 UTC m=+0.440905283 container died 3a7819e5e3e18baf93bf8d8278ea11fd8329509becb6a0de1f59d564f1296b87 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 24 21:55:00 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3a7819e5e3e18baf93bf8d8278ea11fd8329509becb6a0de1f59d564f1296b87-userdata-shm.mount: Deactivated successfully.
Nov 24 21:55:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d5400d576c37ef71ca03c2f6dad30474af350f6f2c4a632c3d43db850903df1-merged.mount: Deactivated successfully.
Nov 24 21:55:00 compute-0 podman[189512]: 2025-11-24 21:55:00.404505478 +0000 UTC m=+0.520440864 container cleanup 3a7819e5e3e18baf93bf8d8278ea11fd8329509becb6a0de1f59d564f1296b87 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 21:55:00 compute-0 podman[189512]: nova_compute
Nov 24 21:55:00 compute-0 podman[189548]: 2025-11-24 21:55:00.421641548 +0000 UTC m=+0.071272195 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 24 21:55:00 compute-0 podman[189579]: nova_compute
Nov 24 21:55:00 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Nov 24 21:55:00 compute-0 systemd[1]: Stopped nova_compute container.
Nov 24 21:55:00 compute-0 systemd[1]: Starting nova_compute container...
Nov 24 21:55:00 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:55:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d5400d576c37ef71ca03c2f6dad30474af350f6f2c4a632c3d43db850903df1/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 24 21:55:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d5400d576c37ef71ca03c2f6dad30474af350f6f2c4a632c3d43db850903df1/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 24 21:55:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d5400d576c37ef71ca03c2f6dad30474af350f6f2c4a632c3d43db850903df1/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 24 21:55:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d5400d576c37ef71ca03c2f6dad30474af350f6f2c4a632c3d43db850903df1/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 24 21:55:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d5400d576c37ef71ca03c2f6dad30474af350f6f2c4a632c3d43db850903df1/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 24 21:55:00 compute-0 podman[189593]: 2025-11-24 21:55:00.624814006 +0000 UTC m=+0.122229816 container init 3a7819e5e3e18baf93bf8d8278ea11fd8329509becb6a0de1f59d564f1296b87 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=nova_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 24 21:55:00 compute-0 podman[189593]: 2025-11-24 21:55:00.636995622 +0000 UTC m=+0.134411372 container start 3a7819e5e3e18baf93bf8d8278ea11fd8329509becb6a0de1f59d564f1296b87 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:55:00 compute-0 podman[189593]: nova_compute
Nov 24 21:55:00 compute-0 nova_compute[189608]: + sudo -E kolla_set_configs
Nov 24 21:55:00 compute-0 systemd[1]: Started nova_compute container.
Nov 24 21:55:00 compute-0 sudo[189499]: pam_unix(sudo:session): session closed for user root
Nov 24 21:55:00 compute-0 nova_compute[189608]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 24 21:55:00 compute-0 nova_compute[189608]: INFO:__main__:Validating config file
Nov 24 21:55:00 compute-0 nova_compute[189608]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 24 21:55:00 compute-0 nova_compute[189608]: INFO:__main__:Copying service configuration files
Nov 24 21:55:00 compute-0 nova_compute[189608]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 24 21:55:00 compute-0 nova_compute[189608]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 24 21:55:00 compute-0 nova_compute[189608]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 24 21:55:00 compute-0 nova_compute[189608]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Nov 24 21:55:00 compute-0 nova_compute[189608]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 24 21:55:00 compute-0 nova_compute[189608]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 24 21:55:00 compute-0 nova_compute[189608]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 24 21:55:00 compute-0 nova_compute[189608]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 24 21:55:00 compute-0 nova_compute[189608]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 24 21:55:00 compute-0 nova_compute[189608]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Nov 24 21:55:00 compute-0 nova_compute[189608]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 24 21:55:00 compute-0 nova_compute[189608]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 24 21:55:00 compute-0 nova_compute[189608]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 24 21:55:00 compute-0 nova_compute[189608]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 24 21:55:00 compute-0 nova_compute[189608]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 24 21:55:00 compute-0 nova_compute[189608]: INFO:__main__:Deleting /etc/ceph
Nov 24 21:55:00 compute-0 nova_compute[189608]: INFO:__main__:Creating directory /etc/ceph
Nov 24 21:55:00 compute-0 nova_compute[189608]: INFO:__main__:Setting permission for /etc/ceph
Nov 24 21:55:00 compute-0 nova_compute[189608]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 24 21:55:00 compute-0 nova_compute[189608]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 24 21:55:00 compute-0 nova_compute[189608]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 24 21:55:00 compute-0 nova_compute[189608]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 24 21:55:00 compute-0 nova_compute[189608]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 24 21:55:00 compute-0 nova_compute[189608]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 24 21:55:00 compute-0 nova_compute[189608]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 24 21:55:00 compute-0 nova_compute[189608]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 24 21:55:00 compute-0 nova_compute[189608]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 24 21:55:00 compute-0 nova_compute[189608]: INFO:__main__:Writing out command to execute
Nov 24 21:55:00 compute-0 nova_compute[189608]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 24 21:55:00 compute-0 nova_compute[189608]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 24 21:55:00 compute-0 nova_compute[189608]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 24 21:55:00 compute-0 nova_compute[189608]: ++ cat /run_command
Nov 24 21:55:00 compute-0 nova_compute[189608]: + CMD=nova-compute
Nov 24 21:55:00 compute-0 nova_compute[189608]: + ARGS=
Nov 24 21:55:00 compute-0 nova_compute[189608]: + sudo kolla_copy_cacerts
Nov 24 21:55:00 compute-0 nova_compute[189608]: + [[ ! -n '' ]]
Nov 24 21:55:00 compute-0 nova_compute[189608]: + . kolla_extend_start
Nov 24 21:55:00 compute-0 nova_compute[189608]: + echo 'Running command: '\''nova-compute'\'''
Nov 24 21:55:00 compute-0 nova_compute[189608]: Running command: 'nova-compute'
Nov 24 21:55:00 compute-0 nova_compute[189608]: + umask 0022
Nov 24 21:55:00 compute-0 nova_compute[189608]: + exec nova-compute
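The block above is the kolla container bootstrap: kolla_set_configs reads /var/lib/kolla/config_files/config.json, and with the COPY_ALWAYS strategy it deletes each destination, re-copies the source, and resets permissions before the start script reads /run_command and execs nova-compute. The following is only a minimal sketch of that copy loop under assumed config.json fields (source/dest/perm); it is not the real kolla implementation.

```python
# Minimal sketch of the COPY_ALWAYS strategy seen in the kolla_set_configs
# output above. NOT the real kolla code; the config.json entry fields
# (source/dest/perm) are assumptions based only on the log lines.
import json
import os
import shutil

CONFIG = "/var/lib/kolla/config_files/config.json"

def copy_always(entry):
    src, dest = entry["source"], entry["dest"]
    if os.path.lexists(dest):
        print(f"Deleting {dest}")
        if os.path.isdir(dest) and not os.path.islink(dest):
            shutil.rmtree(dest)
        else:
            os.remove(dest)
    print(f"Copying {src} to {dest}")
    if os.path.isdir(src):
        shutil.copytree(src, dest)
    else:
        shutil.copy2(src, dest)
    print(f"Setting permission for {dest}")
    os.chmod(dest, int(entry.get("perm", "0600"), 8))

if __name__ == "__main__":
    with open(CONFIG) as f:
        for entry in json.load(f).get("config_files", []):
            copy_always(entry)
```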
Nov 24 21:55:01 compute-0 sudo[189770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfqqeuifhanhblkyujksghpyrpppuavi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021300.9767797-1546-11480890819608/AnsiballZ_podman_container.py'
Nov 24 21:55:01 compute-0 sudo[189770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:55:01 compute-0 python3.9[189772]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 24 21:55:01 compute-0 systemd[1]: Started libpod-conmon-8a42b1e04b14257945e784a1561820f135da31df35a4d3326f03003da0b3b447.scope.
Nov 24 21:55:01 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:55:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85ee028154a896bbb2c2ff9bf18808f77ce4607e2e94650f48525be594417870/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Nov 24 21:55:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85ee028154a896bbb2c2ff9bf18808f77ce4607e2e94650f48525be594417870/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 24 21:55:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85ee028154a896bbb2c2ff9bf18808f77ce4607e2e94650f48525be594417870/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Nov 24 21:55:01 compute-0 podman[189799]: 2025-11-24 21:55:01.978477571 +0000 UTC m=+0.193673929 container init 8a42b1e04b14257945e784a1561820f135da31df35a4d3326f03003da0b3b447 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 24 21:55:01 compute-0 podman[189799]: 2025-11-24 21:55:01.991657289 +0000 UTC m=+0.206853617 container start 8a42b1e04b14257945e784a1561820f135da31df35a4d3326f03003da0b3b447 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 24 21:55:02 compute-0 python3.9[189772]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Nov 24 21:55:02 compute-0 nova_compute_init[189821]: INFO:nova_statedir:Applying nova statedir ownership
Nov 24 21:55:02 compute-0 nova_compute_init[189821]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Nov 24 21:55:02 compute-0 nova_compute_init[189821]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Nov 24 21:55:02 compute-0 nova_compute_init[189821]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Nov 24 21:55:02 compute-0 nova_compute_init[189821]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Nov 24 21:55:02 compute-0 nova_compute_init[189821]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Nov 24 21:55:02 compute-0 nova_compute_init[189821]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Nov 24 21:55:02 compute-0 nova_compute_init[189821]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Nov 24 21:55:02 compute-0 nova_compute_init[189821]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Nov 24 21:55:02 compute-0 nova_compute_init[189821]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Nov 24 21:55:02 compute-0 nova_compute_init[189821]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Nov 24 21:55:02 compute-0 nova_compute_init[189821]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Nov 24 21:55:02 compute-0 nova_compute_init[189821]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Nov 24 21:55:02 compute-0 nova_compute_init[189821]: INFO:nova_statedir:Nova statedir ownership complete
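The nova_compute_init container pipes the output of nova_statedir_ownership.py to `logger -t nova_compute_init`, which is why the lines above appear under that tag. The script walks /var/lib/nova, chowns anything not already owned by 42436:42436, resets the SELinux context, and skips the path named in NOVA_STATEDIR_OWNERSHIP_SKIP (/var/lib/nova/compute_id). A rough sketch of that pass, not the real /sbin/nova_statedir_ownership.py; the uid/gid, paths and environment variable come from the log, while the chcon call is an assumption used for illustration:

```python
# Rough sketch of the ownership pass logged above (not the real script).
import os
import subprocess

TARGET_UID = TARGET_GID = 42436
STATEDIR = "/var/lib/nova"
CONTEXT = "system_u:object_r:container_file_t:s0"
SKIP = {p for p in os.environ.get("NOVA_STATEDIR_OWNERSHIP_SKIP", "").split(":") if p}

for dirpath, dirnames, filenames in os.walk(STATEDIR):
    for path in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
        if path in SKIP:
            continue
        st = os.lstat(path)
        print(f"Checking uid: {st.st_uid} gid: {st.st_gid} path: {path}")
        if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
            print(f"Changing ownership of {path} to {TARGET_UID}:{TARGET_GID}")
            os.lchown(path, TARGET_UID, TARGET_GID)
        # restore the SELinux context expected by the nova_compute container
        subprocess.run(["chcon", "-h", CONTEXT, path], check=False)

print("Nova statedir ownership complete")
```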
Nov 24 21:55:02 compute-0 systemd[1]: libpod-8a42b1e04b14257945e784a1561820f135da31df35a4d3326f03003da0b3b447.scope: Deactivated successfully.
Nov 24 21:55:02 compute-0 podman[189836]: 2025-11-24 21:55:02.114033192 +0000 UTC m=+0.028287826 container died 8a42b1e04b14257945e784a1561820f135da31df35a4d3326f03003da0b3b447 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 24 21:55:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8a42b1e04b14257945e784a1561820f135da31df35a4d3326f03003da0b3b447-userdata-shm.mount: Deactivated successfully.
Nov 24 21:55:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-85ee028154a896bbb2c2ff9bf18808f77ce4607e2e94650f48525be594417870-merged.mount: Deactivated successfully.
Nov 24 21:55:02 compute-0 podman[189836]: 2025-11-24 21:55:02.15174023 +0000 UTC m=+0.065994894 container cleanup 8a42b1e04b14257945e784a1561820f135da31df35a4d3326f03003da0b3b447 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Nov 24 21:55:02 compute-0 systemd[1]: libpod-conmon-8a42b1e04b14257945e784a1561820f135da31df35a4d3326f03003da0b3b447.scope: Deactivated successfully.
Nov 24 21:55:02 compute-0 sudo[189770]: pam_unix(sudo:session): session closed for user root
Nov 24 21:55:02 compute-0 nova_compute[189608]: 2025-11-24 21:55:02.642 189613 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 24 21:55:02 compute-0 nova_compute[189608]: 2025-11-24 21:55:02.642 189613 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 24 21:55:02 compute-0 nova_compute[189608]: 2025-11-24 21:55:02.643 189613 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 24 21:55:02 compute-0 nova_compute[189608]: 2025-11-24 21:55:02.643 189613 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
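The os_vif lines record nova-compute initializing its VIF plugin framework: the linux_bridge, noop, and ovs plugin classes named in the log are discovered from their entry points when os_vif.initialize() is called. A minimal illustration of that call, shown here out of its nova context:

```python
# Minimal illustration of the plugin discovery logged above: os_vif.initialize()
# loads the linux_bridge, noop and ovs VIF plugins from their entry points;
# os_vif.plug()/unplug() later dispatch to whichever plugin matches a VIF type.
import os_vif

os_vif.initialize()
```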
Nov 24 21:55:02 compute-0 sshd-session[161432]: Connection closed by 192.168.122.30 port 39358
Nov 24 21:55:02 compute-0 sshd-session[161429]: pam_unix(sshd:session): session closed for user zuul
Nov 24 21:55:02 compute-0 systemd[1]: session-24.scope: Deactivated successfully.
Nov 24 21:55:02 compute-0 systemd[1]: session-24.scope: Consumed 2min 15.230s CPU time.
Nov 24 21:55:02 compute-0 systemd-logind[806]: Session 24 logged out. Waiting for processes to exit.
Nov 24 21:55:02 compute-0 systemd-logind[806]: Removed session 24.
Nov 24 21:55:02 compute-0 nova_compute[189608]: 2025-11-24 21:55:02.761 189613 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 21:55:02 compute-0 nova_compute[189608]: 2025-11-24 21:55:02.784 189613 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 21:55:02 compute-0 nova_compute[189608]: 2025-11-24 21:55:02.784 189613 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
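The grep against /sbin/iscsiadm is the volume-connector code probing whether the installed iscsiadm supports manual scans (the node.session.scan option). Here /sbin/iscsiadm is the run-on-host wrapper copied in earlier, so the string is not found, the command exits 1, and the caller treats that as "not supported" rather than retrying. A hedged reproduction of that probe through oslo.concurrency; the exact caller is in the nova/os-brick volume code and is not shown in this log:

```python
# Hedged reproduction of the probe above; only the shape of the call, not the
# real caller.
from oslo_concurrency import processutils

try:
    processutils.execute("grep", "-F", "node.session.scan", "/sbin/iscsiadm")
    manual_scan_supported = True
except processutils.ProcessExecutionError:
    # grep exits 1 when the pattern is absent; treat it as unsupported
    manual_scan_supported = False
```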
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.255 189613 INFO nova.virt.driver [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.361 189613 INFO nova.compute.provider_config [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.374 189613 DEBUG oslo_concurrency.lockutils [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.374 189613 DEBUG oslo_concurrency.lockutils [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.375 189613 DEBUG oslo_concurrency.lockutils [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.375 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.375 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.375 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.376 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.376 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.376 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.376 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.376 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.377 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.377 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.377 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.377 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.377 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.377 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.378 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.378 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.378 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.378 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.378 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.378 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.378 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.379 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.379 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.379 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.379 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.379 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.379 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.380 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.380 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.380 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.380 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.380 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.380 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.381 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.381 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.381 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.381 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.381 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.381 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.382 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.382 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.382 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.382 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.382 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.383 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.383 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.383 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.383 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.383 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.384 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.384 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.384 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.384 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.384 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.384 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.385 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.385 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.385 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.385 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.385 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.385 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.386 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.386 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.386 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.386 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.386 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.386 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.386 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.387 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.387 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.387 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.387 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.387 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.387 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.387 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.388 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.388 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.388 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.388 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.388 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.388 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.389 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.389 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.389 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.389 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.389 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.389 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.390 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.390 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.390 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.390 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.390 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.390 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.390 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.390 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.391 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.391 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.391 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.391 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.391 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.391 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.392 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.392 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.392 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.392 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.392 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.392 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.393 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.393 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.393 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.393 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.393 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.393 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.393 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.394 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.394 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.394 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.394 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.394 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.394 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.395 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.395 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.395 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.395 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.395 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.395 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.396 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.396 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.396 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.396 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.396 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.396 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.397 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.397 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.397 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.397 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.397 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.397 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.397 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.397 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.398 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.398 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.398 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.398 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.398 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.398 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.399 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.399 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.399 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.399 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.399 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.399 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.400 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.400 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.400 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.400 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.400 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.400 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.401 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.401 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.401 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.401 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.401 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.401 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.402 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.402 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.402 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.402 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.402 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.402 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.403 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.403 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.403 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.403 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.403 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.404 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.404 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.404 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.404 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.404 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.404 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.405 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.405 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.405 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.405 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.405 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.405 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.405 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.406 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.406 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.406 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.406 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.406 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.406 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.407 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.407 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.407 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.407 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.407 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.407 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.408 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.408 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.408 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.408 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.408 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.408 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.409 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.409 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.409 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.409 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.409 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.409 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.410 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.410 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.410 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.410 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.410 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.411 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.411 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.411 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.411 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.411 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.412 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.412 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.412 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.412 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.412 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.412 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.413 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.413 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.413 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.413 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.413 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.413 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.414 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.414 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.414 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.414 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.414 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.414 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.415 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.415 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.415 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.415 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.415 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.415 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.415 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.416 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.416 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.416 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.416 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.416 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.416 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.416 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.417 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.417 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.417 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.417 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.417 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.417 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.417 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.418 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.418 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.418 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.418 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.418 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.418 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.419 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.419 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.419 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.419 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.419 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.419 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.420 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.420 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.420 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.420 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.420 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.421 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.421 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.421 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.421 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.421 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.421 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.422 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.422 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.422 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.422 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.422 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.423 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.423 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.423 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.423 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.423 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.424 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.424 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.424 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.424 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.424 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.424 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.425 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.425 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.425 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.425 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.425 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.425 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.426 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.426 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.426 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.426 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.426 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.426 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.426 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.427 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.427 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.427 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.427 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.427 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.427 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.427 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.428 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.428 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.428 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.428 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.428 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.429 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.429 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.429 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.429 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.429 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.429 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.429 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.430 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.430 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.430 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.430 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.430 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.430 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.430 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.431 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.431 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.431 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.431 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.431 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.431 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.431 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.432 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.432 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.432 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.432 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.432 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.432 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.433 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.433 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.433 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.433 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.433 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.434 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.434 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.434 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.434 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.434 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.434 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.434 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.435 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.435 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.435 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.435 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.435 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.435 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.435 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.436 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.436 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.436 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.436 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.436 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.436 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.436 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.437 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.437 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.437 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.437 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.437 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.437 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.437 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.438 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.438 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.438 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.438 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.438 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.438 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.438 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.439 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.439 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.439 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.439 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.439 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.439 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.439 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.440 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.440 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.440 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.440 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.440 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.440 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.440 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.441 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.441 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.441 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.441 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.441 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.441 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.441 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.442 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.442 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.442 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.442 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.442 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.442 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.442 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.443 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.443 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.443 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.443 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.443 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.443 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.443 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.443 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.444 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.444 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.444 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.444 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.444 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.444 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.444 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.445 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.445 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.445 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.445 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.445 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.445 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.445 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.446 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.446 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.446 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.446 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.446 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.446 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.446 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.447 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.447 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.447 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.447 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.447 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.447 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.447 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.448 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.448 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.448 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.448 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.448 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.448 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.448 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.449 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.449 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.449 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.449 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.449 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.449 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.450 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.450 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.450 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.450 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.450 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.450 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.450 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.451 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.451 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.451 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.451 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.451 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.451 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.452 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.452 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.452 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.452 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.452 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.452 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.452 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.453 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.453 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.453 189613 WARNING oslo_config.cfg [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 24 21:55:03 compute-0 nova_compute[189608]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 24 21:55:03 compute-0 nova_compute[189608]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 24 21:55:03 compute-0 nova_compute[189608]: and ``live_migration_inbound_addr`` respectively.
Nov 24 21:55:03 compute-0 nova_compute[189608]: ).  Its value may be silently ignored in the future.
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.453 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
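[editor's note: the warning above says live_migration_uri is deprecated in favor of live_migration_scheme and live_migration_inbound_addr, both of which appear elsewhere in this option dump. A minimal sketch of the equivalent replacement settings in the [libvirt] section of nova.conf, assuming the current qemu+tls://%s/system URI should be preserved; the inbound address value below is illustrative, not taken from this host:]

    [libvirt]
    # replaces the scheme component of the deprecated live_migration_uri (qemu+tls -> tls)
    live_migration_scheme = tls
    # replaces the host placeholder (%s); set to this compute node's migration address (illustrative value)
    live_migration_inbound_addr = 192.0.2.10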
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.453 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.453 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.454 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.454 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.454 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.454 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.454 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.454 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.454 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.455 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.455 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.455 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.455 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.455 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.455 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.456 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.456 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.456 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.456 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.456 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.456 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.457 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.457 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.457 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.457 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.457 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.457 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.457 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.458 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.458 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.458 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.458 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.458 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.458 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.459 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.459 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.459 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.459 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.459 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.459 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.459 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.459 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.460 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.460 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.460 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.460 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.460 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.460 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.461 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.461 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.461 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.461 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.461 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.461 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.461 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.461 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.462 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.462 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.462 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.462 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.462 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.462 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.462 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.463 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.463 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.463 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.463 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.463 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.463 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.463 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.464 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.464 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.464 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.464 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.464 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.464 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.464 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.464 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.465 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.465 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.465 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.465 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.465 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.465 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.465 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.466 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.466 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.466 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.466 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.466 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.466 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.466 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.467 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.467 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.467 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.467 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.467 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.467 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.467 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.468 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.468 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.468 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.468 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.468 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.468 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.468 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.469 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.469 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.469 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.469 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.469 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.469 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.469 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.470 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.470 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.470 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.470 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.470 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.470 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.470 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.471 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.471 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.471 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.471 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.471 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.471 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.471 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.472 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.472 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.472 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.472 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.472 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.472 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.472 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.473 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.473 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.473 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.473 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.473 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.473 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.474 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.474 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.474 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.474 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.474 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.474 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.474 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.475 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.475 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.475 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.475 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.475 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.475 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.475 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.476 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.476 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.476 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.476 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.476 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.476 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.476 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.477 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.477 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.477 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.477 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.477 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.477 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.477 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.478 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.478 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.478 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.478 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.478 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.478 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.478 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.479 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.479 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.479 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.479 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.479 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.479 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.480 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.480 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.480 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.480 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.480 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.480 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.480 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.481 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.481 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.481 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.481 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.481 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.481 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.481 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.481 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.482 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.482 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.482 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.482 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.482 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.482 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.483 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.483 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.483 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.483 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.483 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.483 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.484 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.484 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.484 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.484 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.484 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.485 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.485 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.485 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.485 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.485 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.485 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.485 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.485 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.486 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.486 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.486 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.486 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.486 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.486 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.486 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.487 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.487 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.487 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.487 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.487 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.487 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.487 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.487 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.488 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.488 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.488 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.488 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.488 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.488 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.488 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.489 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.489 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.489 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.489 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.489 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.489 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.490 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.490 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.490 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.490 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.490 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.490 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.490 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.491 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.491 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.491 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.491 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.491 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.491 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.492 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.492 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.492 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.492 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.492 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.492 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.492 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.493 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.493 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.493 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.493 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.493 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.493 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.493 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.494 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.494 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.494 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.494 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.494 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.494 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.494 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.495 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.495 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.495 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.495 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.495 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.495 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.495 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.496 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.496 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.496 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.496 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.496 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.496 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.496 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.497 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.497 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.497 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.497 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.497 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.497 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.498 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.498 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.498 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.498 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.498 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.499 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.499 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.499 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.499 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.500 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.500 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.500 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.500 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.500 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.501 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.501 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.501 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.501 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.501 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.501 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.502 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.502 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.502 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.502 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.502 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.502 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.502 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.503 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.503 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.503 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.503 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.503 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.503 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.504 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.504 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.504 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.504 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.504 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.505 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.505 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.505 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.505 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.505 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.505 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.506 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.506 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.506 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.506 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.506 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.507 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.507 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.507 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.507 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.507 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.507 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.507 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.508 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.508 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.508 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.508 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.508 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.509 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.509 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.509 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.509 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.509 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.510 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.510 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.510 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.510 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.510 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.510 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.511 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.511 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.511 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.511 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.511 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.511 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.512 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.512 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.512 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.512 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.512 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.512 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.513 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.513 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.513 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.513 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.513 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.513 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.513 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.514 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.514 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.514 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.514 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.514 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.514 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.515 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.515 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.515 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.515 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.515 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.515 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.516 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.516 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.516 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.516 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.516 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.517 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.517 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.517 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.517 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.517 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.518 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.518 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.518 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.518 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.518 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.518 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.518 189613 DEBUG oslo_service.service [None req-f1280880-a16c-4777-823e-c6f9035d6436 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.520 189613 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.536 189613 INFO nova.virt.node [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Determined node identity 7680d048-14f1-46f8-a34d-a7eb32eb11df from /var/lib/nova/compute_id
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.537 189613 DEBUG nova.virt.libvirt.host [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.537 189613 DEBUG nova.virt.libvirt.host [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.537 189613 DEBUG nova.virt.libvirt.host [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.538 189613 DEBUG nova.virt.libvirt.host [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.552 189613 DEBUG nova.virt.libvirt.host [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fd655c63e50> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.554 189613 DEBUG nova.virt.libvirt.host [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fd655c63e50> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.555 189613 INFO nova.virt.libvirt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Connection event '1' reason 'None'
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.561 189613 INFO nova.virt.libvirt.host [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Libvirt host capabilities <capabilities>
Nov 24 21:55:03 compute-0 nova_compute[189608]: 
Nov 24 21:55:03 compute-0 nova_compute[189608]:   <host>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <uuid>c15acc49-e00e-4e10-af5a-4da075840387</uuid>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <cpu>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <arch>x86_64</arch>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model>EPYC-Rome-v4</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <vendor>AMD</vendor>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <microcode version='16777317'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <signature family='23' model='49' stepping='0'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <maxphysaddr mode='emulate' bits='40'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature name='x2apic'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature name='tsc-deadline'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature name='osxsave'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature name='hypervisor'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature name='tsc_adjust'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature name='spec-ctrl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature name='stibp'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature name='arch-capabilities'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature name='ssbd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature name='cmp_legacy'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature name='topoext'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature name='virt-ssbd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature name='lbrv'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature name='tsc-scale'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature name='vmcb-clean'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature name='pause-filter'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature name='pfthreshold'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature name='svme-addr-chk'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature name='rdctl-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature name='skip-l1dfl-vmentry'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature name='mds-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature name='pschange-mc-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <pages unit='KiB' size='4'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <pages unit='KiB' size='2048'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <pages unit='KiB' size='1048576'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </cpu>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <power_management>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <suspend_mem/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <suspend_disk/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <suspend_hybrid/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </power_management>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <iommu support='no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <migration_features>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <live/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <uri_transports>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <uri_transport>tcp</uri_transport>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <uri_transport>rdma</uri_transport>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </uri_transports>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </migration_features>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <topology>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <cells num='1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <cell id='0'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:           <memory unit='KiB'>7864312</memory>
Nov 24 21:55:03 compute-0 nova_compute[189608]:           <pages unit='KiB' size='4'>1966078</pages>
Nov 24 21:55:03 compute-0 nova_compute[189608]:           <pages unit='KiB' size='2048'>0</pages>
Nov 24 21:55:03 compute-0 nova_compute[189608]:           <pages unit='KiB' size='1048576'>0</pages>
Nov 24 21:55:03 compute-0 nova_compute[189608]:           <distances>
Nov 24 21:55:03 compute-0 nova_compute[189608]:             <sibling id='0' value='10'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:           </distances>
Nov 24 21:55:03 compute-0 nova_compute[189608]:           <cpus num='8'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:           </cpus>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         </cell>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </cells>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </topology>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <cache>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </cache>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <secmodel>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model>selinux</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <doi>0</doi>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </secmodel>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <secmodel>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model>dac</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <doi>0</doi>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <baselabel type='kvm'>+107:+107</baselabel>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <baselabel type='qemu'>+107:+107</baselabel>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </secmodel>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   </host>
Nov 24 21:55:03 compute-0 nova_compute[189608]: 
Nov 24 21:55:03 compute-0 nova_compute[189608]:   <guest>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <os_type>hvm</os_type>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <arch name='i686'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <wordsize>32</wordsize>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <domain type='qemu'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <domain type='kvm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </arch>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <features>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <pae/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <nonpae/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <acpi default='on' toggle='yes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <apic default='on' toggle='no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <cpuselection/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <deviceboot/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <disksnapshot default='on' toggle='no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <externalSnapshot/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </features>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   </guest>
Nov 24 21:55:03 compute-0 nova_compute[189608]: 
Nov 24 21:55:03 compute-0 nova_compute[189608]:   <guest>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <os_type>hvm</os_type>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <arch name='x86_64'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <wordsize>64</wordsize>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <domain type='qemu'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <domain type='kvm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </arch>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <features>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <acpi default='on' toggle='yes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <apic default='on' toggle='no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <cpuselection/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <deviceboot/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <disksnapshot default='on' toggle='no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <externalSnapshot/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </features>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   </guest>
Nov 24 21:55:03 compute-0 nova_compute[189608]: 
Nov 24 21:55:03 compute-0 nova_compute[189608]: </capabilities>
Nov 24 21:55:03 compute-0 nova_compute[189608]: 
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.568 189613 DEBUG nova.virt.libvirt.volume.mount [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.573 189613 DEBUG nova.virt.libvirt.host [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.579 189613 DEBUG nova.virt.libvirt.host [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 24 21:55:03 compute-0 nova_compute[189608]: <domainCapabilities>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   <path>/usr/libexec/qemu-kvm</path>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   <domain>kvm</domain>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   <arch>i686</arch>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   <vcpu max='240'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   <iothreads supported='yes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   <os supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <enum name='firmware'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <loader supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='type'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>rom</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>pflash</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='readonly'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>yes</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>no</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='secure'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>no</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </loader>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   </os>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   <cpu>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <mode name='host-passthrough' supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='hostPassthroughMigratable'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>on</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>off</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </mode>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <mode name='maximum' supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='maximumMigratable'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>on</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>off</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </mode>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <mode name='host-model' supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <vendor>AMD</vendor>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='x2apic'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='tsc-deadline'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='hypervisor'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='tsc_adjust'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='spec-ctrl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='stibp'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='ssbd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='cmp_legacy'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='overflow-recov'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='succor'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='ibrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='amd-ssbd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='virt-ssbd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='lbrv'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='tsc-scale'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='vmcb-clean'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='flushbyasid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='pause-filter'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='pfthreshold'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='svme-addr-chk'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='disable' name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </mode>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <mode name='custom' supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Broadwell'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Broadwell-IBRS'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Broadwell-noTSX'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Broadwell-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Broadwell-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Broadwell-v3'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Broadwell-v4'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Cascadelake-Server'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Cascadelake-Server-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Cascadelake-Server-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Cascadelake-Server-v3'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Cascadelake-Server-v4'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Cascadelake-Server-v5'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Cooperlake'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Cooperlake-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Cooperlake-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Denverton'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='mpx'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Denverton-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='mpx'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Denverton-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Denverton-v3'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Dhyana-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='EPYC-Genoa'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amd-psfd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='auto-ibrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='no-nested-data-bp'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='null-sel-clr-base'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='stibp-always-on'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='EPYC-Genoa-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amd-psfd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='auto-ibrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='no-nested-data-bp'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='null-sel-clr-base'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='stibp-always-on'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='EPYC-Milan'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='EPYC-Milan-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='EPYC-Milan-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amd-psfd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='no-nested-data-bp'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='null-sel-clr-base'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='stibp-always-on'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='EPYC-Rome'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='EPYC-Rome-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='EPYC-Rome-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='EPYC-Rome-v3'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='EPYC-v3'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='EPYC-v4'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='GraniteRapids'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-fp16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-int8'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-tile'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-fp16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='bus-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fbsdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrc'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fzrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='mcdt-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pbrsb-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='prefetchiti'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='psdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='serialize'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='tsx-ldtrk'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xfd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='GraniteRapids-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-fp16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-int8'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-tile'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-fp16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='bus-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fbsdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrc'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fzrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='mcdt-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pbrsb-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='prefetchiti'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='psdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='serialize'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='tsx-ldtrk'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xfd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='GraniteRapids-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-fp16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-int8'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-tile'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx10'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx10-128'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx10-256'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx10-512'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-fp16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='bus-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='cldemote'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fbsdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrc'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fzrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='mcdt-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdir64b'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdiri'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pbrsb-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='prefetchiti'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='psdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='serialize'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ss'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='tsx-ldtrk'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xfd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Haswell'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Haswell-IBRS'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Haswell-noTSX'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Haswell-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Haswell-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Haswell-v3'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Haswell-v4'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Icelake-Server'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Icelake-Server-noTSX'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Icelake-Server-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Icelake-Server-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Icelake-Server-v3'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Icelake-Server-v4'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Icelake-Server-v5'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Icelake-Server-v6'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Icelake-Server-v7'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='IvyBridge'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='IvyBridge-IBRS'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='IvyBridge-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='IvyBridge-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='KnightsMill'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-4fmaps'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-4vnniw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512er'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512pf'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ss'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='KnightsMill-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-4fmaps'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-4vnniw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512er'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512pf'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ss'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Opteron_G4'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fma4'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xop'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Opteron_G4-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fma4'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xop'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Opteron_G5'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fma4'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='tbm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xop'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Opteron_G5-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fma4'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='tbm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xop'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='SapphireRapids'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-int8'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-tile'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-fp16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='bus-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrc'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fzrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='serialize'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='tsx-ldtrk'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xfd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='SapphireRapids-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-int8'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-tile'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-fp16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='bus-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrc'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fzrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='serialize'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='tsx-ldtrk'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xfd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='SapphireRapids-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-int8'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-tile'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-fp16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='bus-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fbsdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrc'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fzrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='psdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='serialize'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='tsx-ldtrk'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xfd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='SapphireRapids-v3'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-int8'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-tile'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-fp16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='bus-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='cldemote'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fbsdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrc'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fzrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdir64b'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdiri'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='psdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='serialize'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ss'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='tsx-ldtrk'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xfd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='SierraForest'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-ne-convert'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-vnni-int8'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='bus-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='cmpccxadd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fbsdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='mcdt-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pbrsb-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='psdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='serialize'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='SierraForest-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-ne-convert'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-vnni-int8'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='bus-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='cmpccxadd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fbsdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='mcdt-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pbrsb-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='psdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='serialize'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Client'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Client-IBRS'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Client-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Client-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Client-v3'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Client-v4'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Server'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Server-IBRS'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Server-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Server-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Server-v3'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Server-v4'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Server-v5'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Snowridge'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='cldemote'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='core-capability'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdir64b'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdiri'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='mpx'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='split-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Snowridge-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='cldemote'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='core-capability'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdir64b'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdiri'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='mpx'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='split-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Snowridge-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='cldemote'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='core-capability'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdir64b'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdiri'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='split-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Snowridge-v3'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='cldemote'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='core-capability'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdir64b'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdiri'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='split-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Snowridge-v4'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='cldemote'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdir64b'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdiri'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='athlon'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='3dnow'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='3dnowext'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='athlon-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='3dnow'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='3dnowext'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='core2duo'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ss'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='core2duo-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ss'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='coreduo'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ss'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='coreduo-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ss'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='n270'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ss'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='n270-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ss'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='phenom'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='3dnow'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='3dnowext'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='phenom-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='3dnow'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='3dnowext'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </mode>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   </cpu>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   <memoryBacking supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <enum name='sourceType'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <value>file</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <value>anonymous</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <value>memfd</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   </memoryBacking>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   <devices>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <disk supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='diskDevice'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>disk</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>cdrom</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>floppy</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>lun</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='bus'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>ide</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>fdc</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>scsi</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>virtio</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>usb</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>sata</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='model'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>virtio</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>virtio-transitional</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>virtio-non-transitional</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </disk>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <graphics supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='type'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>vnc</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>egl-headless</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>dbus</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </graphics>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <video supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='modelType'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>vga</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>cirrus</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>virtio</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>none</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>bochs</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>ramfb</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </video>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <hostdev supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='mode'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>subsystem</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='startupPolicy'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>default</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>mandatory</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>requisite</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>optional</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='subsysType'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>usb</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>pci</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>scsi</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='capsType'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='pciBackend'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </hostdev>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <rng supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='model'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>virtio</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>virtio-transitional</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>virtio-non-transitional</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='backendModel'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>random</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>egd</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>builtin</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </rng>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <filesystem supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='driverType'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>path</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>handle</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>virtiofs</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </filesystem>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <tpm supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='model'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>tpm-tis</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>tpm-crb</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='backendModel'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>emulator</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>external</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='backendVersion'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>2.0</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </tpm>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <redirdev supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='bus'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>usb</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </redirdev>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <channel supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='type'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>pty</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>unix</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </channel>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <crypto supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='model'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='type'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>qemu</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='backendModel'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>builtin</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </crypto>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <interface supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='backendType'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>default</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>passt</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </interface>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <panic supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='model'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>isa</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>hyperv</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </panic>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <console supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='type'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>null</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>vc</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>pty</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>dev</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>file</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>pipe</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>stdio</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>udp</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>tcp</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>unix</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>qemu-vdagent</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>dbus</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </console>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   </devices>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   <features>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <gic supported='no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <vmcoreinfo supported='yes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <genid supported='yes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <backingStoreInput supported='yes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <backup supported='yes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <async-teardown supported='yes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <ps2 supported='yes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <sev supported='no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <sgx supported='no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <hyperv supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='features'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>relaxed</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>vapic</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>spinlocks</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>vpindex</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>runtime</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>synic</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>stimer</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>reset</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>vendor_id</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>frequencies</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>reenlightenment</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>tlbflush</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>ipi</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>avic</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>emsr_bitmap</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>xmm_input</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <defaults>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <spinlocks>4095</spinlocks>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <stimer_direct>on</stimer_direct>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <tlbflush_direct>on</tlbflush_direct>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <tlbflush_extended>on</tlbflush_extended>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </defaults>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </hyperv>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <launchSecurity supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='sectype'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>tdx</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </launchSecurity>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   </features>
Nov 24 21:55:03 compute-0 nova_compute[189608]: </domainCapabilities>
Nov 24 21:55:03 compute-0 nova_compute[189608]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.588 189613 DEBUG nova.virt.libvirt.host [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 24 21:55:03 compute-0 nova_compute[189608]: <domainCapabilities>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   <path>/usr/libexec/qemu-kvm</path>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   <domain>kvm</domain>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   <arch>i686</arch>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   <vcpu max='4096'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   <iothreads supported='yes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   <os supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <enum name='firmware'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <loader supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='type'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>rom</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>pflash</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='readonly'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>yes</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>no</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='secure'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>no</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </loader>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   </os>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   <cpu>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <mode name='host-passthrough' supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='hostPassthroughMigratable'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>on</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>off</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </mode>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <mode name='maximum' supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='maximumMigratable'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>on</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>off</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </mode>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <mode name='host-model' supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <vendor>AMD</vendor>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='x2apic'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='tsc-deadline'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='hypervisor'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='tsc_adjust'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='spec-ctrl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='stibp'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='ssbd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='cmp_legacy'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='overflow-recov'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='succor'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='ibrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='amd-ssbd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='virt-ssbd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='lbrv'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='tsc-scale'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='vmcb-clean'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='flushbyasid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='pause-filter'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='pfthreshold'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='svme-addr-chk'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='disable' name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </mode>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <mode name='custom' supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Broadwell'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Broadwell-IBRS'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Broadwell-noTSX'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Broadwell-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Broadwell-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Broadwell-v3'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Broadwell-v4'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Cascadelake-Server'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Cascadelake-Server-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Cascadelake-Server-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Cascadelake-Server-v3'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Cascadelake-Server-v4'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Cascadelake-Server-v5'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Cooperlake'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Cooperlake-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Cooperlake-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Denverton'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='mpx'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Denverton-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='mpx'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Denverton-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Denverton-v3'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Dhyana-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='EPYC-Genoa'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amd-psfd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='auto-ibrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='no-nested-data-bp'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='null-sel-clr-base'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='stibp-always-on'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='EPYC-Genoa-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amd-psfd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='auto-ibrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='no-nested-data-bp'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='null-sel-clr-base'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='stibp-always-on'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='EPYC-Milan'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='EPYC-Milan-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='EPYC-Milan-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amd-psfd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='no-nested-data-bp'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='null-sel-clr-base'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='stibp-always-on'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='EPYC-Rome'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='EPYC-Rome-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='EPYC-Rome-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='EPYC-Rome-v3'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='EPYC-v3'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='EPYC-v4'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='GraniteRapids'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-fp16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-int8'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-tile'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-fp16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='bus-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fbsdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrc'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fzrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='mcdt-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pbrsb-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='prefetchiti'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='psdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='serialize'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='tsx-ldtrk'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xfd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='GraniteRapids-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-fp16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-int8'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-tile'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-fp16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='bus-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fbsdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrc'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fzrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='mcdt-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pbrsb-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='prefetchiti'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='psdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='serialize'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='tsx-ldtrk'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xfd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='GraniteRapids-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-fp16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-int8'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-tile'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx10'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx10-128'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx10-256'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx10-512'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-fp16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='bus-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='cldemote'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fbsdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrc'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fzrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='mcdt-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdir64b'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdiri'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pbrsb-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='prefetchiti'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='psdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='serialize'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ss'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='tsx-ldtrk'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xfd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Haswell'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Haswell-IBRS'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Haswell-noTSX'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Haswell-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Haswell-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Haswell-v3'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Haswell-v4'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Icelake-Server'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Icelake-Server-noTSX'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Icelake-Server-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Icelake-Server-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Icelake-Server-v3'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Icelake-Server-v4'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Icelake-Server-v5'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Icelake-Server-v6'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Icelake-Server-v7'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='IvyBridge'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='IvyBridge-IBRS'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='IvyBridge-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='IvyBridge-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='KnightsMill'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-4fmaps'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-4vnniw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512er'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512pf'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ss'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='KnightsMill-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-4fmaps'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-4vnniw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512er'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512pf'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ss'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Opteron_G4'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fma4'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xop'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Opteron_G4-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fma4'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xop'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Opteron_G5'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fma4'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='tbm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xop'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Opteron_G5-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fma4'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='tbm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xop'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='SapphireRapids'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-int8'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-tile'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-fp16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='bus-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrc'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fzrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='serialize'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='tsx-ldtrk'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xfd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='SapphireRapids-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-int8'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-tile'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-fp16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='bus-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrc'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fzrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='serialize'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='tsx-ldtrk'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xfd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='SapphireRapids-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-int8'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-tile'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-fp16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='bus-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fbsdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrc'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fzrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='psdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='serialize'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='tsx-ldtrk'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xfd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='SapphireRapids-v3'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-int8'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-tile'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-fp16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='bus-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='cldemote'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fbsdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrc'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fzrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdir64b'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdiri'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='psdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='serialize'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ss'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='tsx-ldtrk'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xfd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='SierraForest'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-ne-convert'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-vnni-int8'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='bus-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='cmpccxadd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fbsdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='mcdt-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pbrsb-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='psdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='serialize'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='SierraForest-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-ne-convert'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-vnni-int8'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='bus-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='cmpccxadd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fbsdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='mcdt-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pbrsb-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='psdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='serialize'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Client'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Client-IBRS'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Client-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Client-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Client-v3'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Client-v4'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Server'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Server-IBRS'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Server-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Server-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Server-v3'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Server-v4'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Server-v5'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Snowridge'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='cldemote'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='core-capability'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdir64b'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdiri'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='mpx'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='split-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Snowridge-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='cldemote'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='core-capability'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdir64b'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdiri'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='mpx'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='split-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Snowridge-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='cldemote'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='core-capability'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdir64b'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdiri'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='split-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Snowridge-v3'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='cldemote'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='core-capability'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdir64b'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdiri'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='split-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Snowridge-v4'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='cldemote'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdir64b'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdiri'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='athlon'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='3dnow'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='3dnowext'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='athlon-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='3dnow'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='3dnowext'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='core2duo'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ss'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='core2duo-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ss'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='coreduo'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ss'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='coreduo-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ss'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='n270'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ss'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='n270-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ss'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='phenom'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='3dnow'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='3dnowext'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='phenom-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='3dnow'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='3dnowext'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </mode>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   </cpu>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   <memoryBacking supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <enum name='sourceType'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <value>file</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <value>anonymous</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <value>memfd</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   </memoryBacking>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   <devices>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <disk supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='diskDevice'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>disk</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>cdrom</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>floppy</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>lun</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='bus'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>fdc</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>scsi</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>virtio</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>usb</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>sata</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='model'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>virtio</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>virtio-transitional</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>virtio-non-transitional</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </disk>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <graphics supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='type'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>vnc</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>egl-headless</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>dbus</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </graphics>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <video supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='modelType'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>vga</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>cirrus</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>virtio</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>none</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>bochs</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>ramfb</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </video>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <hostdev supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='mode'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>subsystem</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='startupPolicy'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>default</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>mandatory</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>requisite</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>optional</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='subsysType'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>usb</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>pci</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>scsi</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='capsType'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='pciBackend'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </hostdev>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <rng supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='model'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>virtio</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>virtio-transitional</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>virtio-non-transitional</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='backendModel'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>random</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>egd</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>builtin</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </rng>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <filesystem supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='driverType'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>path</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>handle</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>virtiofs</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </filesystem>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <tpm supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='model'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>tpm-tis</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>tpm-crb</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='backendModel'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>emulator</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>external</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='backendVersion'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>2.0</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </tpm>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <redirdev supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='bus'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>usb</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </redirdev>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <channel supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='type'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>pty</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>unix</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </channel>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <crypto supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='model'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='type'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>qemu</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='backendModel'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>builtin</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </crypto>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <interface supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='backendType'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>default</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>passt</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </interface>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <panic supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='model'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>isa</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>hyperv</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </panic>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <console supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='type'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>null</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>vc</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>pty</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>dev</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>file</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>pipe</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>stdio</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>udp</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>tcp</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>unix</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>qemu-vdagent</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>dbus</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </console>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   </devices>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   <features>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <gic supported='no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <vmcoreinfo supported='yes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <genid supported='yes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <backingStoreInput supported='yes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <backup supported='yes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <async-teardown supported='yes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <ps2 supported='yes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <sev supported='no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <sgx supported='no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <hyperv supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='features'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>relaxed</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>vapic</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>spinlocks</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>vpindex</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>runtime</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>synic</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>stimer</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>reset</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>vendor_id</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>frequencies</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>reenlightenment</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>tlbflush</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>ipi</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>avic</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>emsr_bitmap</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>xmm_input</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <defaults>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <spinlocks>4095</spinlocks>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <stimer_direct>on</stimer_direct>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <tlbflush_direct>on</tlbflush_direct>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <tlbflush_extended>on</tlbflush_extended>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </defaults>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </hyperv>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <launchSecurity supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='sectype'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>tdx</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </launchSecurity>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   </features>
Nov 24 21:55:03 compute-0 nova_compute[189608]: </domainCapabilities>
Nov 24 21:55:03 compute-0 nova_compute[189608]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.638 189613 DEBUG nova.virt.libvirt.host [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.642 189613 DEBUG nova.virt.libvirt.host [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 24 21:55:03 compute-0 nova_compute[189608]: <domainCapabilities>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   <path>/usr/libexec/qemu-kvm</path>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   <domain>kvm</domain>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   <arch>x86_64</arch>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   <vcpu max='240'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   <iothreads supported='yes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   <os supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <enum name='firmware'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <loader supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='type'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>rom</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>pflash</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='readonly'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>yes</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>no</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='secure'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>no</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </loader>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   </os>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   <cpu>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <mode name='host-passthrough' supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='hostPassthroughMigratable'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>on</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>off</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </mode>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <mode name='maximum' supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='maximumMigratable'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>on</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>off</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </mode>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <mode name='host-model' supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <vendor>AMD</vendor>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='x2apic'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='tsc-deadline'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='hypervisor'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='tsc_adjust'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='spec-ctrl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='stibp'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='ssbd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='cmp_legacy'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='overflow-recov'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='succor'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='ibrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='amd-ssbd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='virt-ssbd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='lbrv'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='tsc-scale'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='vmcb-clean'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='flushbyasid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='pause-filter'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='pfthreshold'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='svme-addr-chk'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='disable' name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </mode>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <mode name='custom' supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Broadwell'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Broadwell-IBRS'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Broadwell-noTSX'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Broadwell-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Broadwell-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Broadwell-v3'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Broadwell-v4'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Cascadelake-Server'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Cascadelake-Server-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Cascadelake-Server-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Cascadelake-Server-v3'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Cascadelake-Server-v4'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Cascadelake-Server-v5'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Cooperlake'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Cooperlake-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Cooperlake-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Denverton'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='mpx'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Denverton-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='mpx'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Denverton-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Denverton-v3'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Dhyana-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='EPYC-Genoa'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amd-psfd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='auto-ibrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='no-nested-data-bp'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='null-sel-clr-base'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='stibp-always-on'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='EPYC-Genoa-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amd-psfd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='auto-ibrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='no-nested-data-bp'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='null-sel-clr-base'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='stibp-always-on'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='EPYC-Milan'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='EPYC-Milan-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='EPYC-Milan-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amd-psfd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='no-nested-data-bp'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='null-sel-clr-base'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='stibp-always-on'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='EPYC-Rome'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='EPYC-Rome-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='EPYC-Rome-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='EPYC-Rome-v3'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='EPYC-v3'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='EPYC-v4'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='GraniteRapids'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-fp16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-int8'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-tile'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-fp16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='bus-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fbsdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrc'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fzrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='mcdt-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pbrsb-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='prefetchiti'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='psdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='serialize'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='tsx-ldtrk'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xfd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='GraniteRapids-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-fp16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-int8'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-tile'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-fp16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='bus-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fbsdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrc'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fzrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='mcdt-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pbrsb-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='prefetchiti'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='psdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='serialize'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='tsx-ldtrk'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xfd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='GraniteRapids-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-fp16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-int8'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-tile'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx10'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx10-128'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx10-256'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx10-512'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-fp16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='bus-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='cldemote'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fbsdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrc'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fzrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='mcdt-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdir64b'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdiri'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pbrsb-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='prefetchiti'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='psdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='serialize'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ss'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='tsx-ldtrk'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xfd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Haswell'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Haswell-IBRS'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Haswell-noTSX'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Haswell-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Haswell-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Haswell-v3'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Haswell-v4'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Icelake-Server'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Icelake-Server-noTSX'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Icelake-Server-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Icelake-Server-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Icelake-Server-v3'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Icelake-Server-v4'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Icelake-Server-v5'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Icelake-Server-v6'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Icelake-Server-v7'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='IvyBridge'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='IvyBridge-IBRS'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='IvyBridge-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='IvyBridge-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='KnightsMill'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-4fmaps'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-4vnniw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512er'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512pf'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ss'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='KnightsMill-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-4fmaps'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-4vnniw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512er'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512pf'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ss'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Opteron_G4'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fma4'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xop'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Opteron_G4-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fma4'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xop'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Opteron_G5'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fma4'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='tbm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xop'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Opteron_G5-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fma4'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='tbm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xop'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='SapphireRapids'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-int8'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-tile'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-fp16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='bus-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrc'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fzrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='serialize'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='tsx-ldtrk'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xfd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='SapphireRapids-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-int8'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-tile'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-fp16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='bus-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrc'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fzrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='serialize'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='tsx-ldtrk'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xfd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='SapphireRapids-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-int8'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-tile'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-fp16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='bus-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fbsdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrc'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fzrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='psdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='serialize'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='tsx-ldtrk'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xfd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='SapphireRapids-v3'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-int8'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-tile'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-fp16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='bus-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='cldemote'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fbsdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrc'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fzrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdir64b'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdiri'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='psdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='serialize'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ss'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='tsx-ldtrk'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xfd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='SierraForest'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-ne-convert'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-vnni-int8'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='bus-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='cmpccxadd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fbsdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='mcdt-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pbrsb-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='psdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='serialize'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='SierraForest-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-ne-convert'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-vnni-int8'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='bus-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='cmpccxadd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fbsdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='mcdt-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pbrsb-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='psdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='serialize'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Client'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Client-IBRS'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Client-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Client-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Client-v3'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Client-v4'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Server'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Server-IBRS'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Server-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Server-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Server-v3'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Server-v4'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Server-v5'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Snowridge'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='cldemote'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='core-capability'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdir64b'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdiri'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='mpx'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='split-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Snowridge-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='cldemote'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='core-capability'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdir64b'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdiri'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='mpx'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='split-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Snowridge-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='cldemote'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='core-capability'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdir64b'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdiri'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='split-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Snowridge-v3'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='cldemote'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='core-capability'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdir64b'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdiri'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='split-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Snowridge-v4'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='cldemote'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdir64b'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdiri'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='athlon'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='3dnow'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='3dnowext'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='athlon-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='3dnow'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='3dnowext'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='core2duo'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ss'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='core2duo-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ss'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='coreduo'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ss'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='coreduo-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ss'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='n270'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ss'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='n270-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ss'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='phenom'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='3dnow'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='3dnowext'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='phenom-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='3dnow'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='3dnowext'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </mode>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   </cpu>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   <memoryBacking supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <enum name='sourceType'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <value>file</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <value>anonymous</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <value>memfd</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   </memoryBacking>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   <devices>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <disk supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='diskDevice'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>disk</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>cdrom</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>floppy</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>lun</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='bus'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>ide</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>fdc</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>scsi</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>virtio</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>usb</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>sata</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='model'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>virtio</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>virtio-transitional</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>virtio-non-transitional</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </disk>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <graphics supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='type'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>vnc</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>egl-headless</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>dbus</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </graphics>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <video supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='modelType'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>vga</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>cirrus</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>virtio</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>none</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>bochs</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>ramfb</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </video>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <hostdev supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='mode'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>subsystem</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='startupPolicy'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>default</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>mandatory</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>requisite</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>optional</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='subsysType'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>usb</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>pci</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>scsi</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='capsType'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='pciBackend'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </hostdev>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <rng supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='model'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>virtio</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>virtio-transitional</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>virtio-non-transitional</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='backendModel'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>random</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>egd</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>builtin</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </rng>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <filesystem supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='driverType'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>path</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>handle</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>virtiofs</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </filesystem>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <tpm supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='model'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>tpm-tis</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>tpm-crb</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='backendModel'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>emulator</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>external</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='backendVersion'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>2.0</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </tpm>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <redirdev supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='bus'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>usb</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </redirdev>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <channel supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='type'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>pty</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>unix</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </channel>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <crypto supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='model'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='type'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>qemu</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='backendModel'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>builtin</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </crypto>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <interface supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='backendType'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>default</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>passt</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </interface>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <panic supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='model'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>isa</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>hyperv</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </panic>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <console supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='type'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>null</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>vc</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>pty</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>dev</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>file</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>pipe</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>stdio</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>udp</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>tcp</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>unix</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>qemu-vdagent</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>dbus</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </console>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   </devices>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   <features>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <gic supported='no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <vmcoreinfo supported='yes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <genid supported='yes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <backingStoreInput supported='yes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <backup supported='yes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <async-teardown supported='yes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <ps2 supported='yes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <sev supported='no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <sgx supported='no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <hyperv supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='features'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>relaxed</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>vapic</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>spinlocks</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>vpindex</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>runtime</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>synic</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>stimer</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>reset</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>vendor_id</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>frequencies</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>reenlightenment</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>tlbflush</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>ipi</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>avic</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>emsr_bitmap</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>xmm_input</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <defaults>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <spinlocks>4095</spinlocks>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <stimer_direct>on</stimer_direct>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <tlbflush_direct>on</tlbflush_direct>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <tlbflush_extended>on</tlbflush_extended>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </defaults>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </hyperv>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <launchSecurity supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='sectype'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>tdx</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </launchSecurity>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   </features>
Nov 24 21:55:03 compute-0 nova_compute[189608]: </domainCapabilities>
Nov 24 21:55:03 compute-0 nova_compute[189608]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.702 189613 DEBUG nova.virt.libvirt.host [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 24 21:55:03 compute-0 nova_compute[189608]: <domainCapabilities>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   <path>/usr/libexec/qemu-kvm</path>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   <domain>kvm</domain>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   <arch>x86_64</arch>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   <vcpu max='4096'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   <iothreads supported='yes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   <os supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <enum name='firmware'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <value>efi</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <loader supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='type'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>rom</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>pflash</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='readonly'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>yes</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>no</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='secure'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>yes</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>no</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </loader>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   </os>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   <cpu>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <mode name='host-passthrough' supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='hostPassthroughMigratable'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>on</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>off</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </mode>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <mode name='maximum' supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='maximumMigratable'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>on</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>off</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </mode>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <mode name='host-model' supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <vendor>AMD</vendor>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='x2apic'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='tsc-deadline'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='hypervisor'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='tsc_adjust'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='spec-ctrl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='stibp'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='ssbd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='cmp_legacy'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='overflow-recov'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='succor'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='ibrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='amd-ssbd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='virt-ssbd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='lbrv'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='tsc-scale'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='vmcb-clean'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='flushbyasid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='pause-filter'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='pfthreshold'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='svme-addr-chk'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <feature policy='disable' name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </mode>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <mode name='custom' supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Broadwell'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Broadwell-IBRS'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Broadwell-noTSX'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Broadwell-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Broadwell-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Broadwell-v3'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Broadwell-v4'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Cascadelake-Server'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Cascadelake-Server-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Cascadelake-Server-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Cascadelake-Server-v3'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Cascadelake-Server-v4'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Cascadelake-Server-v5'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Cooperlake'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Cooperlake-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Cooperlake-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Denverton'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='mpx'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Denverton-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='mpx'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Denverton-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Denverton-v3'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Dhyana-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='EPYC-Genoa'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amd-psfd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='auto-ibrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='no-nested-data-bp'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='null-sel-clr-base'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='stibp-always-on'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='EPYC-Genoa-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amd-psfd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='auto-ibrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='no-nested-data-bp'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='null-sel-clr-base'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='stibp-always-on'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='EPYC-Milan'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='EPYC-Milan-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='EPYC-Milan-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amd-psfd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='no-nested-data-bp'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='null-sel-clr-base'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='stibp-always-on'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='EPYC-Rome'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='EPYC-Rome-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='EPYC-Rome-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='EPYC-Rome-v3'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='EPYC-v3'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='EPYC-v4'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='GraniteRapids'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-fp16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-int8'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-tile'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-fp16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='bus-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fbsdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrc'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fzrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='mcdt-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pbrsb-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='prefetchiti'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='psdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='serialize'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='tsx-ldtrk'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xfd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='GraniteRapids-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-fp16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-int8'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-tile'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-fp16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='bus-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fbsdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrc'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fzrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='mcdt-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pbrsb-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='prefetchiti'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='psdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='serialize'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='tsx-ldtrk'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xfd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='GraniteRapids-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-fp16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-int8'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-tile'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx10'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx10-128'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx10-256'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx10-512'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-fp16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='bus-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='cldemote'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fbsdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrc'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fzrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='mcdt-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdir64b'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdiri'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pbrsb-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='prefetchiti'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='psdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='serialize'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ss'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='tsx-ldtrk'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xfd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Haswell'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Haswell-IBRS'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Haswell-noTSX'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Haswell-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Haswell-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Haswell-v3'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Haswell-v4'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Icelake-Server'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Icelake-Server-noTSX'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Icelake-Server-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Icelake-Server-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Icelake-Server-v3'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Icelake-Server-v4'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Icelake-Server-v5'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Icelake-Server-v6'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Icelake-Server-v7'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='IvyBridge'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='IvyBridge-IBRS'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='IvyBridge-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='IvyBridge-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='KnightsMill'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-4fmaps'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-4vnniw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512er'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512pf'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ss'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='KnightsMill-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-4fmaps'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-4vnniw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512er'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512pf'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ss'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Opteron_G4'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fma4'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xop'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Opteron_G4-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fma4'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xop'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Opteron_G5'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fma4'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='tbm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xop'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Opteron_G5-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fma4'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='tbm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xop'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='SapphireRapids'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-int8'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-tile'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-fp16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='bus-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrc'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fzrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='serialize'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='tsx-ldtrk'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xfd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='SapphireRapids-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-int8'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-tile'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-fp16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='bus-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrc'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fzrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='serialize'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='tsx-ldtrk'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xfd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='SapphireRapids-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-int8'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-tile'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-fp16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='bus-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fbsdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrc'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fzrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='psdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='serialize'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='tsx-ldtrk'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xfd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='SapphireRapids-v3'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-int8'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='amx-tile'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-bf16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-fp16'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512-vpopcntdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bitalg'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vbmi2'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='bus-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='cldemote'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fbsdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrc'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fzrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='la57'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdir64b'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdiri'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='psdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='serialize'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ss'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='taa-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='tsx-ldtrk'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xfd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='SierraForest'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-ne-convert'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-vnni-int8'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='bus-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='cmpccxadd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fbsdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='mcdt-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pbrsb-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='psdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='serialize'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='SierraForest-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-ifma'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-ne-convert'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-vnni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx-vnni-int8'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='bus-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='cmpccxadd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fbsdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='fsrs'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ibrs-all'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='mcdt-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pbrsb-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='psdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='sbdr-ssdp-no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='serialize'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vaes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='vpclmulqdq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Client'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Client-IBRS'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Client-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Client-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Client-v3'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Client-v4'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Server'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Server-IBRS'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Server-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Server-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='hle'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='rtm'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Server-v3'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Server-v4'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Skylake-Server-v5'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512bw'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512cd'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512dq'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512f'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='avx512vl'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='invpcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pcid'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='pku'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Snowridge'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='cldemote'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='core-capability'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdir64b'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdiri'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='mpx'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='split-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Snowridge-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='cldemote'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='core-capability'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdir64b'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdiri'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='mpx'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='split-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Snowridge-v2'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='cldemote'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='core-capability'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdir64b'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdiri'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='split-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Snowridge-v3'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='cldemote'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='core-capability'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdir64b'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdiri'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='split-lock-detect'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='Snowridge-v4'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='cldemote'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='erms'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='gfni'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdir64b'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='movdiri'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='xsaves'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='athlon'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='3dnow'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='3dnowext'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='athlon-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='3dnow'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='3dnowext'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='core2duo'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ss'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='core2duo-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ss'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='coreduo'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ss'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='coreduo-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ss'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='n270'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ss'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='n270-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='ss'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='phenom'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='3dnow'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='3dnowext'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <blockers model='phenom-v1'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='3dnow'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <feature name='3dnowext'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </blockers>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </mode>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   </cpu>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   <memoryBacking supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <enum name='sourceType'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <value>file</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <value>anonymous</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <value>memfd</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   </memoryBacking>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   <devices>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <disk supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='diskDevice'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>disk</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>cdrom</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>floppy</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>lun</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='bus'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>fdc</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>scsi</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>virtio</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>usb</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>sata</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='model'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>virtio</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>virtio-transitional</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>virtio-non-transitional</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </disk>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <graphics supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='type'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>vnc</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>egl-headless</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>dbus</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </graphics>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <video supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='modelType'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>vga</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>cirrus</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>virtio</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>none</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>bochs</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>ramfb</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </video>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <hostdev supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='mode'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>subsystem</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='startupPolicy'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>default</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>mandatory</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>requisite</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>optional</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='subsysType'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>usb</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>pci</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>scsi</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='capsType'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='pciBackend'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </hostdev>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <rng supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='model'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>virtio</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>virtio-transitional</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>virtio-non-transitional</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='backendModel'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>random</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>egd</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>builtin</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </rng>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <filesystem supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='driverType'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>path</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>handle</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>virtiofs</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </filesystem>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <tpm supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='model'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>tpm-tis</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>tpm-crb</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='backendModel'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>emulator</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>external</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='backendVersion'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>2.0</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </tpm>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <redirdev supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='bus'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>usb</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </redirdev>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <channel supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='type'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>pty</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>unix</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </channel>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <crypto supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='model'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='type'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>qemu</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='backendModel'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>builtin</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </crypto>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <interface supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='backendType'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>default</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>passt</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </interface>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <panic supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='model'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>isa</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>hyperv</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </panic>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <console supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='type'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>null</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>vc</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>pty</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>dev</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>file</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>pipe</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>stdio</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>udp</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>tcp</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>unix</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>qemu-vdagent</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>dbus</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </console>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   </devices>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   <features>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <gic supported='no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <vmcoreinfo supported='yes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <genid supported='yes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <backingStoreInput supported='yes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <backup supported='yes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <async-teardown supported='yes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <ps2 supported='yes'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <sev supported='no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <sgx supported='no'/>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <hyperv supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='features'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>relaxed</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>vapic</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>spinlocks</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>vpindex</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>runtime</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>synic</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>stimer</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>reset</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>vendor_id</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>frequencies</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>reenlightenment</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>tlbflush</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>ipi</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>avic</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>emsr_bitmap</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>xmm_input</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <defaults>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <spinlocks>4095</spinlocks>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <stimer_direct>on</stimer_direct>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <tlbflush_direct>on</tlbflush_direct>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <tlbflush_extended>on</tlbflush_extended>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </defaults>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </hyperv>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     <launchSecurity supported='yes'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       <enum name='sectype'>
Nov 24 21:55:03 compute-0 nova_compute[189608]:         <value>tdx</value>
Nov 24 21:55:03 compute-0 nova_compute[189608]:       </enum>
Nov 24 21:55:03 compute-0 nova_compute[189608]:     </launchSecurity>
Nov 24 21:55:03 compute-0 nova_compute[189608]:   </features>
Nov 24 21:55:03 compute-0 nova_compute[189608]: </domainCapabilities>
Nov 24 21:55:03 compute-0 nova_compute[189608]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.760 189613 DEBUG nova.virt.libvirt.host [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.761 189613 DEBUG nova.virt.libvirt.host [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.761 189613 DEBUG nova.virt.libvirt.host [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.761 189613 INFO nova.virt.libvirt.host [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Secure Boot support detected
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.763 189613 INFO nova.virt.libvirt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.764 189613 INFO nova.virt.libvirt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.775 189613 DEBUG nova.virt.libvirt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.798 189613 INFO nova.virt.node [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Determined node identity 7680d048-14f1-46f8-a34d-a7eb32eb11df from /var/lib/nova/compute_id
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.816 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Verified node 7680d048-14f1-46f8-a34d-a7eb32eb11df matches my host compute-0.ctlplane.example.com _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.834 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.968 189613 DEBUG oslo_concurrency.lockutils [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.968 189613 DEBUG oslo_concurrency.lockutils [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.968 189613 DEBUG oslo_concurrency.lockutils [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 21:55:03 compute-0 nova_compute[189608]: 2025-11-24 21:55:03.969 189613 DEBUG nova.compute.resource_tracker [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 21:55:04 compute-0 nova_compute[189608]: 2025-11-24 21:55:04.122 189613 WARNING nova.virt.libvirt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 21:55:04 compute-0 nova_compute[189608]: 2025-11-24 21:55:04.123 189613 DEBUG nova.compute.resource_tracker [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6027MB free_disk=72.43153381347656GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 21:55:04 compute-0 nova_compute[189608]: 2025-11-24 21:55:04.124 189613 DEBUG oslo_concurrency.lockutils [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 21:55:04 compute-0 nova_compute[189608]: 2025-11-24 21:55:04.124 189613 DEBUG oslo_concurrency.lockutils [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 21:55:04 compute-0 nova_compute[189608]: 2025-11-24 21:55:04.293 189613 DEBUG nova.compute.resource_tracker [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 21:55:04 compute-0 nova_compute[189608]: 2025-11-24 21:55:04.294 189613 DEBUG nova.compute.resource_tracker [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 21:55:04 compute-0 nova_compute[189608]: 2025-11-24 21:55:04.363 189613 DEBUG nova.scheduler.client.report [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Refreshing inventories for resource provider 7680d048-14f1-46f8-a34d-a7eb32eb11df _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 24 21:55:04 compute-0 nova_compute[189608]: 2025-11-24 21:55:04.379 189613 DEBUG nova.scheduler.client.report [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Updating ProviderTree inventory for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df from _refresh_and_get_inventory using data: {} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 24 21:55:04 compute-0 nova_compute[189608]: 2025-11-24 21:55:04.379 189613 DEBUG nova.compute.provider_tree [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 21:55:04 compute-0 nova_compute[189608]: 2025-11-24 21:55:04.400 189613 DEBUG nova.scheduler.client.report [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Refreshing aggregate associations for resource provider 7680d048-14f1-46f8-a34d-a7eb32eb11df, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 24 21:55:04 compute-0 nova_compute[189608]: 2025-11-24 21:55:04.431 189613 DEBUG nova.scheduler.client.report [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Refreshing trait associations for resource provider 7680d048-14f1-46f8-a34d-a7eb32eb11df, traits: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 24 21:55:04 compute-0 nova_compute[189608]: 2025-11-24 21:55:04.466 189613 DEBUG nova.virt.libvirt.host [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Nov 24 21:55:04 compute-0 nova_compute[189608]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Nov 24 21:55:04 compute-0 nova_compute[189608]: 2025-11-24 21:55:04.467 189613 INFO nova.virt.libvirt.host [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] kernel doesn't support AMD SEV
Nov 24 21:55:04 compute-0 nova_compute[189608]: 2025-11-24 21:55:04.467 189613 DEBUG nova.compute.provider_tree [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Updating inventory in ProviderTree for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 24 21:55:04 compute-0 nova_compute[189608]: 2025-11-24 21:55:04.468 189613 DEBUG nova.virt.libvirt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 24 21:55:04 compute-0 nova_compute[189608]: 2025-11-24 21:55:04.527 189613 DEBUG nova.scheduler.client.report [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Updated inventory for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Nov 24 21:55:04 compute-0 nova_compute[189608]: 2025-11-24 21:55:04.528 189613 DEBUG nova.compute.provider_tree [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Updating resource provider 7680d048-14f1-46f8-a34d-a7eb32eb11df generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 24 21:55:04 compute-0 nova_compute[189608]: 2025-11-24 21:55:04.528 189613 DEBUG nova.compute.provider_tree [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Updating inventory in ProviderTree for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 24 21:55:04 compute-0 nova_compute[189608]: 2025-11-24 21:55:04.638 189613 DEBUG nova.compute.provider_tree [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Updating resource provider 7680d048-14f1-46f8-a34d-a7eb32eb11df generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 24 21:55:04 compute-0 nova_compute[189608]: 2025-11-24 21:55:04.663 189613 DEBUG nova.compute.resource_tracker [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 21:55:04 compute-0 nova_compute[189608]: 2025-11-24 21:55:04.663 189613 DEBUG oslo_concurrency.lockutils [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.540s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 21:55:04 compute-0 nova_compute[189608]: 2025-11-24 21:55:04.664 189613 DEBUG nova.service [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Nov 24 21:55:04 compute-0 nova_compute[189608]: 2025-11-24 21:55:04.714 189613 DEBUG nova.service [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Nov 24 21:55:04 compute-0 nova_compute[189608]: 2025-11-24 21:55:04.715 189613 DEBUG nova.servicegroup.drivers.db [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Nov 24 21:55:08 compute-0 sshd-session[189911]: Accepted publickey for zuul from 192.168.122.30 port 39532 ssh2: ECDSA SHA256:EZFVJPqmz26FBCQurIACD0v24tZqtt5C19oQoS60ZHY
Nov 24 21:55:08 compute-0 systemd-logind[806]: New session 26 of user zuul.
Nov 24 21:55:08 compute-0 systemd[1]: Started Session 26 of User zuul.
Nov 24 21:55:08 compute-0 sshd-session[189911]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 21:55:09 compute-0 python3.9[190064]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 21:55:10 compute-0 sudo[190218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcqjdatwczrqufatvccotcdagqszzvkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021309.8572235-36-8006344963917/AnsiballZ_systemd_service.py'
Nov 24 21:55:10 compute-0 sudo[190218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:55:10 compute-0 python3.9[190220]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 21:55:10 compute-0 systemd[1]: Reloading.
Nov 24 21:55:10 compute-0 systemd-rc-local-generator[190249]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:55:10 compute-0 systemd-sysv-generator[190252]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:55:11 compute-0 sudo[190218]: pam_unix(sudo:session): session closed for user root
Nov 24 21:55:11 compute-0 nova_compute[189608]: 2025-11-24 21:55:11.718 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 21:55:11 compute-0 nova_compute[189608]: 2025-11-24 21:55:11.759 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 21:55:12 compute-0 python3.9[190406]: ansible-ansible.builtin.service_facts Invoked
Nov 24 21:55:12 compute-0 network[190423]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 21:55:12 compute-0 network[190424]: 'network-scripts' will be removed from distribution in near future.
Nov 24 21:55:12 compute-0 network[190425]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 21:55:14 compute-0 podman[190475]: 2025-11-24 21:55:14.292698592 +0000 UTC m=+0.101272009 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251118)
Nov 24 21:55:16 compute-0 sudo[190716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhanatnprhwdtwpinkhgalphgpgluvqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021316.5759134-55-255314122364436/AnsiballZ_systemd_service.py'
Nov 24 21:55:16 compute-0 sudo[190716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:55:17 compute-0 python3.9[190718]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 21:55:17 compute-0 sudo[190716]: pam_unix(sudo:session): session closed for user root
Nov 24 21:55:18 compute-0 sudo[190869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcdlavgudehcwgrynqryzzlqggivflqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021317.7489102-65-220763366700535/AnsiballZ_file.py'
Nov 24 21:55:18 compute-0 sudo[190869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:55:18 compute-0 python3.9[190871]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:55:18 compute-0 sudo[190869]: pam_unix(sudo:session): session closed for user root
Nov 24 21:55:18 compute-0 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 21:55:18 compute-0 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 21:55:18 compute-0 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 21:55:19 compute-0 sudo[191022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzobkffycoxyezgdafucgibzxiwymgta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021318.7985053-73-184817511239095/AnsiballZ_file.py'
Nov 24 21:55:19 compute-0 sudo[191022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:55:19 compute-0 python3.9[191024]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:55:19 compute-0 sudo[191022]: pam_unix(sudo:session): session closed for user root
Nov 24 21:55:20 compute-0 sudo[191174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfwpfzjtbnehhhlbuhboakbbzafbyzwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021319.623257-82-268275860419707/AnsiballZ_command.py'
Nov 24 21:55:20 compute-0 sudo[191174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:55:20 compute-0 python3.9[191176]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:55:20 compute-0 sudo[191174]: pam_unix(sudo:session): session closed for user root
Nov 24 21:55:21 compute-0 python3.9[191328]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 24 21:55:22 compute-0 sudo[191478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onorkxoywpptayhhmcgjsdaxyxctkkjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021321.6493998-100-279717474487533/AnsiballZ_systemd_service.py'
Nov 24 21:55:22 compute-0 sudo[191478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:55:22 compute-0 python3.9[191480]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 21:55:22 compute-0 systemd[1]: Reloading.
Nov 24 21:55:22 compute-0 systemd-rc-local-generator[191508]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:55:22 compute-0 systemd-sysv-generator[191511]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:55:22 compute-0 sudo[191478]: pam_unix(sudo:session): session closed for user root
Nov 24 21:55:23 compute-0 sudo[191665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nchasiukgqjjxgcurrruxdienuyrtzwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021322.9689019-108-177576816444661/AnsiballZ_command.py'
Nov 24 21:55:23 compute-0 sudo[191665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:55:23 compute-0 python3.9[191667]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:55:23 compute-0 sudo[191665]: pam_unix(sudo:session): session closed for user root
Nov 24 21:55:24 compute-0 sudo[191818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymokpkkqeahxqxuhwppwyuwbrlovuqms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021323.8694894-117-198299567757810/AnsiballZ_file.py'
Nov 24 21:55:24 compute-0 sudo[191818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:55:24 compute-0 python3.9[191820]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:55:24 compute-0 sudo[191818]: pam_unix(sudo:session): session closed for user root
Nov 24 21:55:25 compute-0 python3.9[191970]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:55:26 compute-0 python3.9[192122]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:55:26 compute-0 python3.9[192243]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764021325.669652-133-49549390227169/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:55:27 compute-0 sudo[192393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyuloagpamsqxqqbcwzwagabdhouicir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021327.2068155-148-71432181518466/AnsiballZ_group.py'
Nov 24 21:55:27 compute-0 sudo[192393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:55:27 compute-0 python3.9[192395]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Nov 24 21:55:27 compute-0 sudo[192393]: pam_unix(sudo:session): session closed for user root
Nov 24 21:55:28 compute-0 sudo[192545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdbqjaftbgzslghhsmvgcefweoudckjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021328.409042-159-130366279980123/AnsiballZ_getent.py'
Nov 24 21:55:28 compute-0 sudo[192545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:55:29 compute-0 python3.9[192547]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Nov 24 21:55:29 compute-0 sudo[192545]: pam_unix(sudo:session): session closed for user root
Nov 24 21:55:29 compute-0 sudo[192698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzddtjhwaarqrauedndepexjodknasro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021329.3522081-167-235085907549944/AnsiballZ_group.py'
Nov 24 21:55:29 compute-0 sudo[192698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:55:29 compute-0 python3.9[192700]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 24 21:55:29 compute-0 groupadd[192701]: group added to /etc/group: name=ceilometer, GID=42405
Nov 24 21:55:29 compute-0 groupadd[192701]: group added to /etc/gshadow: name=ceilometer
Nov 24 21:55:29 compute-0 groupadd[192701]: new group: name=ceilometer, GID=42405
Nov 24 21:55:29 compute-0 sudo[192698]: pam_unix(sudo:session): session closed for user root
Nov 24 21:55:30 compute-0 podman[192783]: 2025-11-24 21:55:30.548420764 +0000 UTC m=+0.106211922 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:55:30 compute-0 podman[192829]: 2025-11-24 21:55:30.638796067 +0000 UTC m=+0.061701282 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 24 21:55:30 compute-0 sudo[192901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jquyevrgzguxriisbkkxzjdnpdsqyhmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021330.2142353-175-201898245562173/AnsiballZ_user.py'
Nov 24 21:55:30 compute-0 sudo[192901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:55:30 compute-0 python3.9[192903]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 24 21:55:31 compute-0 useradd[192905]: new user: name=ceilometer, UID=42405, GID=42405, home=/home/ceilometer, shell=/sbin/nologin, from=/dev/pts/0
Nov 24 21:55:31 compute-0 useradd[192905]: add 'ceilometer' to group 'libvirt'
Nov 24 21:55:31 compute-0 useradd[192905]: add 'ceilometer' to shadow group 'libvirt'
Nov 24 21:55:31 compute-0 sudo[192901]: pam_unix(sudo:session): session closed for user root
Nov 24 21:55:32 compute-0 python3.9[193061]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:55:33 compute-0 python3.9[193182]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764021331.8842442-201-89545788771539/.source.conf _original_basename=ceilometer.conf follow=False checksum=f74f01c63e6cdeca5458ef9aff2a1db5d6a4e4b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:55:33 compute-0 python3.9[193332]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:55:34 compute-0 python3.9[193453]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764021333.2327013-201-73500756336279/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:55:35 compute-0 python3.9[193603]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:55:35 compute-0 python3.9[193724]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764021334.6427858-201-74779840939130/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:55:36 compute-0 python3.9[193874]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:55:38 compute-0 python3.9[194026]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:55:39 compute-0 python3.9[194178]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:55:39 compute-0 python3.9[194299]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764021338.6496947-260-271325026319836/.source.json follow=False _original_basename=ceilometer-agent-compute.json.j2 checksum=264d11e8d3809e7ef745878dce7edd46098e25b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:55:40 compute-0 python3.9[194449]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:55:41 compute-0 python3.9[194525]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:55:41 compute-0 python3.9[194675]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:55:42 compute-0 python3.9[194796]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764021341.3006053-260-57800408004703/.source.json follow=False _original_basename=ceilometer_agent_compute.json.j2 checksum=4096a0f5410f47dcaf8ab19e56a9d8e211effecd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:55:43 compute-0 python3.9[194946]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:55:44 compute-0 python3.9[195067]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764021342.8412213-260-98695150322626/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:55:44 compute-0 podman[195151]: 2025-11-24 21:55:44.5385031 +0000 UTC m=+0.090278371 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 24 21:55:44 compute-0 python3.9[195236]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:55:45 compute-0 python3.9[195357]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764021344.249824-260-67115927535206/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:55:46 compute-0 python3.9[195507]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:55:46 compute-0 python3.9[195628]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764021345.689611-260-231382107295953/.source.json follow=False _original_basename=node_exporter.json.j2 checksum=6e4982940d2bfae88404914dfaf72552f6356d81 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:55:47 compute-0 python3.9[195778]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:55:48 compute-0 python3.9[195899]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764021347.1768162-260-137693610881587/.source.yaml follow=False _original_basename=node_exporter.yaml.j2 checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:55:49 compute-0 python3.9[196049]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:55:49 compute-0 python3.9[196170]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764021348.5743992-260-70289188908716/.source.json follow=False _original_basename=openstack_network_exporter.json.j2 checksum=d474f1e4c3dbd24762592c51cbe5311f0a037273 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:55:50 compute-0 python3.9[196320]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:55:51 compute-0 python3.9[196441]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764021350.0878725-260-177221767958477/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=2b6bd0891e609bf38a73282f42888052b750bed6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:55:52 compute-0 python3.9[196591]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:55:52 compute-0 python3.9[196712]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764021351.50553-260-242524425875715/.source.json follow=False _original_basename=podman_exporter.json.j2 checksum=e342121a88f67e2bae7ebc05d1e6d350470198a5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:55:53 compute-0 python3.9[196862]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:55:54 compute-0 python3.9[196983]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764021353.1054296-260-104293915343173/.source.yaml follow=False _original_basename=podman_exporter.yaml.j2 checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:55:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:55:54.545 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 21:55:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:55:54.546 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 21:55:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:55:54.546 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 21:55:55 compute-0 python3.9[197133]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:55:55 compute-0 python3.9[197209]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/node_exporter.yaml _original_basename=node_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:55:56 compute-0 python3.9[197359]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:55:56 compute-0 python3.9[197435]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml _original_basename=podman_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:55:57 compute-0 python3.9[197585]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:55:58 compute-0 python3.9[197661]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:55:58 compute-0 sudo[197811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgjurbljofojiiriordrjhjidgmymvox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021358.4644823-449-175097919392359/AnsiballZ_file.py'
Nov 24 21:55:58 compute-0 sudo[197811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:55:59 compute-0 python3.9[197813]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:55:59 compute-0 sudo[197811]: pam_unix(sudo:session): session closed for user root
Nov 24 21:55:59 compute-0 sudo[197963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sphkbkvrzelayhcdthzenvryeswlyumt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021359.3133583-457-210855884100949/AnsiballZ_file.py'
Nov 24 21:55:59 compute-0 sudo[197963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:55:59 compute-0 python3.9[197965]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:55:59 compute-0 sudo[197963]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:00 compute-0 sudo[198115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptywspbtvdqxzpqwywsspjgqebliudsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021360.1832922-465-221959645358733/AnsiballZ_file.py'
Nov 24 21:56:00 compute-0 sudo[198115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:00 compute-0 podman[198117]: 2025-11-24 21:56:00.764316925 +0000 UTC m=+0.123603197 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Nov 24 21:56:00 compute-0 python3.9[198118]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:56:00 compute-0 sudo[198115]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:00 compute-0 podman[198144]: 2025-11-24 21:56:00.85623128 +0000 UTC m=+0.068142740 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 24 21:56:01 compute-0 sudo[198313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-somdyoqikbwklukpbkdcraioziaiinfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021361.027812-473-190255311958878/AnsiballZ_systemd_service.py'
Nov 24 21:56:01 compute-0 sudo[198313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:01 compute-0 python3.9[198315]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 21:56:01 compute-0 systemd[1]: Reloading.
Nov 24 21:56:01 compute-0 systemd-rc-local-generator[198344]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:56:01 compute-0 systemd-sysv-generator[198349]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:56:02 compute-0 systemd[1]: Listening on Podman API Socket.
Nov 24 21:56:02 compute-0 sudo[198313]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:02 compute-0 nova_compute[189608]: 2025-11-24 21:56:02.796 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 21:56:02 compute-0 nova_compute[189608]: 2025-11-24 21:56:02.798 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 21:56:02 compute-0 nova_compute[189608]: 2025-11-24 21:56:02.798 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 21:56:02 compute-0 nova_compute[189608]: 2025-11-24 21:56:02.799 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 21:56:02 compute-0 nova_compute[189608]: 2025-11-24 21:56:02.816 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 21:56:02 compute-0 nova_compute[189608]: 2025-11-24 21:56:02.816 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 21:56:02 compute-0 nova_compute[189608]: 2025-11-24 21:56:02.817 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 21:56:02 compute-0 nova_compute[189608]: 2025-11-24 21:56:02.818 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 21:56:02 compute-0 nova_compute[189608]: 2025-11-24 21:56:02.819 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 21:56:02 compute-0 nova_compute[189608]: 2025-11-24 21:56:02.819 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 21:56:02 compute-0 nova_compute[189608]: 2025-11-24 21:56:02.819 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 21:56:02 compute-0 nova_compute[189608]: 2025-11-24 21:56:02.819 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 21:56:02 compute-0 nova_compute[189608]: 2025-11-24 21:56:02.820 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 21:56:02 compute-0 nova_compute[189608]: 2025-11-24 21:56:02.849 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 21:56:02 compute-0 nova_compute[189608]: 2025-11-24 21:56:02.849 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 21:56:02 compute-0 nova_compute[189608]: 2025-11-24 21:56:02.850 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 21:56:02 compute-0 nova_compute[189608]: 2025-11-24 21:56:02.850 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 21:56:02 compute-0 sudo[198505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gclldnsbeqvidsrwwflbyghqthnlcjwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021362.5726128-482-28243680189301/AnsiballZ_stat.py'
Nov 24 21:56:02 compute-0 sudo[198505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:03 compute-0 nova_compute[189608]: 2025-11-24 21:56:03.033 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 21:56:03 compute-0 nova_compute[189608]: 2025-11-24 21:56:03.034 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6012MB free_disk=72.43147277832031GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 21:56:03 compute-0 nova_compute[189608]: 2025-11-24 21:56:03.034 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 21:56:03 compute-0 nova_compute[189608]: 2025-11-24 21:56:03.034 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 21:56:03 compute-0 nova_compute[189608]: 2025-11-24 21:56:03.112 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 21:56:03 compute-0 nova_compute[189608]: 2025-11-24 21:56:03.112 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 21:56:03 compute-0 nova_compute[189608]: 2025-11-24 21:56:03.131 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 21:56:03 compute-0 python3.9[198507]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:56:03 compute-0 nova_compute[189608]: 2025-11-24 21:56:03.146 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 21:56:03 compute-0 nova_compute[189608]: 2025-11-24 21:56:03.148 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 21:56:03 compute-0 nova_compute[189608]: 2025-11-24 21:56:03.148 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 21:56:03 compute-0 sudo[198505]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:03 compute-0 sudo[198628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvdhducwcajoqgksseucvunhmuhnwrgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021362.5726128-482-28243680189301/AnsiballZ_copy.py'
Nov 24 21:56:03 compute-0 sudo[198628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:03 compute-0 python3.9[198630]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764021362.5726128-482-28243680189301/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:56:03 compute-0 sudo[198628]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:04 compute-0 sudo[198704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibwvtxlyzfuojwcqweltwjxlqqdpstrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021362.5726128-482-28243680189301/AnsiballZ_stat.py'
Nov 24 21:56:04 compute-0 sudo[198704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:04 compute-0 python3.9[198706]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:56:04 compute-0 sudo[198704]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:04 compute-0 sudo[198827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etkgiogmjlrcodvecwabprhbbmhbpquj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021362.5726128-482-28243680189301/AnsiballZ_copy.py'
Nov 24 21:56:04 compute-0 sudo[198827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:04 compute-0 python3.9[198829]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764021362.5726128-482-28243680189301/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:56:04 compute-0 sudo[198827]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:05 compute-0 sudo[198979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xftqspffomqbcqmuewopasvevmipwlih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021365.3540819-510-249341765676473/AnsiballZ_container_config_data.py'
Nov 24 21:56:05 compute-0 sudo[198979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:06 compute-0 python3.9[198981]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=ceilometer_agent_compute.json debug=False
Nov 24 21:56:06 compute-0 sudo[198979]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:07 compute-0 sudo[199131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aeyomwgsycigynysnkjuxrurnxdwgqou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021366.5217752-519-104142822480605/AnsiballZ_container_config_hash.py'
Nov 24 21:56:07 compute-0 sudo[199131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:07 compute-0 python3.9[199133]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 24 21:56:07 compute-0 sudo[199131]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:08 compute-0 sudo[199283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdifjrgqxyllvrzooaxsjftpymsjkhnb ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764021367.7409308-529-280386619263017/AnsiballZ_edpm_container_manage.py'
Nov 24 21:56:08 compute-0 sudo[199283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:08 compute-0 python3[199285]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=ceilometer_agent_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 24 21:56:09 compute-0 podman[199321]: 2025-11-24 21:56:09.006740594 +0000 UTC m=+0.084739234 container create a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2)
Nov 24 21:56:09 compute-0 podman[199321]: 2025-11-24 21:56:08.96620916 +0000 UTC m=+0.044207790 image pull 62d0cdbd80511c7b16dc1b12830c26126f29d8961a194546e50bdb4d0a16aab7 quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Nov 24 21:56:09 compute-0 python3[199285]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck compute --label config_id=edpm --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested kolla_start
Nov 24 21:56:09 compute-0 sudo[199283]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:09 compute-0 sudo[199509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsnyczamzbiohoyvbfuqcqozmsuiknoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021369.5464723-537-211516294145281/AnsiballZ_stat.py'
Nov 24 21:56:09 compute-0 sudo[199509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:10 compute-0 python3.9[199511]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:56:10 compute-0 sudo[199509]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:10 compute-0 auditd[705]: Audit daemon rotating log files
Nov 24 21:56:10 compute-0 sudo[199663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atbvvrycmevrjunfxpgaieeqwrztqnhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021370.5166495-546-230346793989845/AnsiballZ_file.py'
Nov 24 21:56:10 compute-0 sudo[199663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:11 compute-0 python3.9[199665]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:56:11 compute-0 sudo[199663]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:11 compute-0 sudo[199814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbilhuhtkpjkdajflhcmfbpsvqlwljhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021371.2448692-546-176816774319439/AnsiballZ_copy.py'
Nov 24 21:56:11 compute-0 sudo[199814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:12 compute-0 python3.9[199816]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764021371.2448692-546-176816774319439/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:56:12 compute-0 sudo[199814]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:12 compute-0 sudo[199890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nihnjrirpofqicfdpeabjsqzjtallbpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021371.2448692-546-176816774319439/AnsiballZ_systemd.py'
Nov 24 21:56:12 compute-0 sudo[199890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:13 compute-0 python3.9[199892]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 21:56:13 compute-0 systemd[1]: Reloading.
Nov 24 21:56:13 compute-0 systemd-rc-local-generator[199920]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:56:13 compute-0 systemd-sysv-generator[199924]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:56:13 compute-0 sudo[199890]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:13 compute-0 sudo[200001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmjzbmogtzvevtvehhpbdpngscjjcplk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021371.2448692-546-176816774319439/AnsiballZ_systemd.py'
Nov 24 21:56:13 compute-0 sudo[200001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:14 compute-0 python3.9[200003]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 21:56:14 compute-0 systemd[1]: Reloading.
Nov 24 21:56:14 compute-0 systemd-rc-local-generator[200033]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:56:14 compute-0 systemd-sysv-generator[200036]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:56:14 compute-0 systemd[1]: Starting ceilometer_agent_compute container...
Nov 24 21:56:14 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:56:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73ebf8b47c594ccd829688ced526f3cf647a40e44615db62d140b08a1890b73e/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Nov 24 21:56:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73ebf8b47c594ccd829688ced526f3cf647a40e44615db62d140b08a1890b73e/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 21:56:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73ebf8b47c594ccd829688ced526f3cf647a40e44615db62d140b08a1890b73e/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Nov 24 21:56:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73ebf8b47c594ccd829688ced526f3cf647a40e44615db62d140b08a1890b73e/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Nov 24 21:56:14 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d.
Nov 24 21:56:14 compute-0 podman[200043]: 2025-11-24 21:56:14.625773033 +0000 UTC m=+0.159988533 container init a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 24 21:56:14 compute-0 ceilometer_agent_compute[200060]: + sudo -E kolla_set_configs
Nov 24 21:56:14 compute-0 podman[200043]: 2025-11-24 21:56:14.663454539 +0000 UTC m=+0.197670019 container start a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm)
Nov 24 21:56:14 compute-0 sudo[200077]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 24 21:56:14 compute-0 ceilometer_agent_compute[200060]: sudo: unable to send audit message: Operation not permitted
Nov 24 21:56:14 compute-0 sudo[200077]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Nov 24 21:56:14 compute-0 podman[200043]: ceilometer_agent_compute
Nov 24 21:56:14 compute-0 systemd[1]: Started ceilometer_agent_compute container.
Nov 24 21:56:14 compute-0 podman[200063]: 2025-11-24 21:56:14.705930333 +0000 UTC m=+0.113504873 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible)
Nov 24 21:56:14 compute-0 sudo[200001]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:14 compute-0 ceilometer_agent_compute[200060]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 24 21:56:14 compute-0 ceilometer_agent_compute[200060]: INFO:__main__:Validating config file
Nov 24 21:56:14 compute-0 ceilometer_agent_compute[200060]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 24 21:56:14 compute-0 ceilometer_agent_compute[200060]: INFO:__main__:Copying service configuration files
Nov 24 21:56:14 compute-0 ceilometer_agent_compute[200060]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Nov 24 21:56:14 compute-0 ceilometer_agent_compute[200060]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Nov 24 21:56:14 compute-0 ceilometer_agent_compute[200060]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Nov 24 21:56:14 compute-0 ceilometer_agent_compute[200060]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Nov 24 21:56:14 compute-0 ceilometer_agent_compute[200060]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Nov 24 21:56:14 compute-0 ceilometer_agent_compute[200060]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Nov 24 21:56:14 compute-0 ceilometer_agent_compute[200060]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 24 21:56:14 compute-0 ceilometer_agent_compute[200060]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 24 21:56:14 compute-0 ceilometer_agent_compute[200060]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 24 21:56:14 compute-0 ceilometer_agent_compute[200060]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 24 21:56:14 compute-0 ceilometer_agent_compute[200060]: INFO:__main__:Writing out command to execute
Nov 24 21:56:14 compute-0 sudo[200077]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:14 compute-0 ceilometer_agent_compute[200060]: ++ cat /run_command
Nov 24 21:56:14 compute-0 podman[200078]: 2025-11-24 21:56:14.756730685 +0000 UTC m=+0.077070275 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 24 21:56:14 compute-0 ceilometer_agent_compute[200060]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Nov 24 21:56:14 compute-0 ceilometer_agent_compute[200060]: + ARGS=
Nov 24 21:56:14 compute-0 ceilometer_agent_compute[200060]: + sudo kolla_copy_cacerts
Nov 24 21:56:14 compute-0 systemd[1]: a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d-1376b789dd1abcc8.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 21:56:14 compute-0 systemd[1]: a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d-1376b789dd1abcc8.service: Failed with result 'exit-code'.
Nov 24 21:56:14 compute-0 sudo[200111]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 24 21:56:14 compute-0 ceilometer_agent_compute[200060]: sudo: unable to send audit message: Operation not permitted
Nov 24 21:56:14 compute-0 sudo[200111]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Nov 24 21:56:14 compute-0 sudo[200111]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:14 compute-0 ceilometer_agent_compute[200060]: + [[ ! -n '' ]]
Nov 24 21:56:14 compute-0 ceilometer_agent_compute[200060]: + . kolla_extend_start
Nov 24 21:56:14 compute-0 ceilometer_agent_compute[200060]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Nov 24 21:56:14 compute-0 ceilometer_agent_compute[200060]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Nov 24 21:56:14 compute-0 ceilometer_agent_compute[200060]: + umask 0022
Nov 24 21:56:14 compute-0 ceilometer_agent_compute[200060]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Nov 24 21:56:15 compute-0 sudo[200259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzihqzfyvfbxpnqzwabowheoupsaparl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021374.9631739-570-177540816528800/AnsiballZ_systemd.py'
Nov 24 21:56:15 compute-0 sudo[200259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.534 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.534 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.534 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.535 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.535 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.535 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.535 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.535 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.535 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.535 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.535 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.536 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.536 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.536 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.536 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.536 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.536 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.536 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.536 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.536 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.536 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.537 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.537 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.537 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.537 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.537 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.537 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.537 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.537 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.537 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.538 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.538 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.538 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.538 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.538 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.538 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.538 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.538 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.538 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.538 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.538 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.538 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.539 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.539 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.539 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.539 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.539 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.539 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.539 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.539 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.539 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.539 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.540 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.540 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.540 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.540 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.540 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.540 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.540 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.540 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.540 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.540 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.541 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.541 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.541 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.541 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.541 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.541 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.541 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.541 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.541 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.541 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.541 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.542 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.542 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.542 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.542 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.542 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.542 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.542 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.542 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.542 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.542 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.543 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.543 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.543 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.543 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.543 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.543 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.543 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.543 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.543 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.544 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.544 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.544 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.544 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.544 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.544 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.544 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.544 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.544 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.544 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.545 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.545 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.545 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.545 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.545 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.545 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.545 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.545 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.545 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.545 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.546 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.546 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.546 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.546 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.546 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.546 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.546 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.546 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.546 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.546 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.547 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.547 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.547 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.547 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.547 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.547 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.547 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.547 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.547 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.547 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.547 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.547 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.548 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.548 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.548 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.548 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.548 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.548 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.548 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.548 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.548 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.548 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.549 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.549 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.549 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.549 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.549 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.549 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.549 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.549 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.549 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.573 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.573 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.574 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.574 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.574 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.574 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.574 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.574 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.575 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.575 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.575 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.575 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.575 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.575 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.576 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.576 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.576 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.576 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.576 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.576 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.576 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.576 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.577 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.577 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.577 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.577 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.577 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.577 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.577 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.577 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.577 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.578 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.578 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.578 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.578 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.578 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.578 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.578 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.578 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.579 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.579 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.579 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.579 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.579 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.579 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.580 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.580 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.580 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.580 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.580 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.580 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.580 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.581 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.581 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.581 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.581 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.581 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.581 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.581 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.582 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.582 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.582 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.582 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.582 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.582 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.582 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.583 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.583 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.583 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.583 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.583 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.583 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.584 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.584 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.584 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.584 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.584 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.584 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.584 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.585 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.585 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.585 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.585 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.585 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.585 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.586 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.586 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.586 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.586 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.586 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.586 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.586 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.586 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.587 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.587 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.587 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.587 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.587 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.587 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.587 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.588 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.588 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.588 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.588 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.588 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.588 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.588 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.589 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.589 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.589 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.589 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.589 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.589 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.590 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.590 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.590 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.590 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.590 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.590 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.591 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.591 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.591 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.591 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.591 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.591 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.591 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.592 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.592 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.592 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.592 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.592 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.592 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.592 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.592 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.593 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.593 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.593 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.593 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.593 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.593 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.593 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.593 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.593 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.594 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.594 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.594 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.594 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.594 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.594 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.594 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.594 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.595 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.595 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.595 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.598 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.600 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.601 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Nov 24 21:56:15 compute-0 python3.9[200261]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 21:56:15 compute-0 systemd[1]: Stopping ceilometer_agent_compute container...
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.802 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.810 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.810 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.810 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.816 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.917 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:319
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.918 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:323
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.918 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentHeartBeatManager(0) [12]
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.927 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.927 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.927 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.927 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.927 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.927 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.927 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.927 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.928 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.928 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.928 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.928 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.928 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.928 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.928 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.928 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.928 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.928 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.928 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.929 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.929 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.929 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.929 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.929 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.929 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.929 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.929 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.929 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.929 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.929 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.929 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.929 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.930 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.930 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.930 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.930 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.930 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.930 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.930 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.930 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.930 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.930 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.930 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.930 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.930 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.930 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.931 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.931 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.931 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.931 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.931 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.931 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.931 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.931 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.931 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.931 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.931 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.931 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.932 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.932 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.932 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.932 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.932 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.932 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.932 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.932 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.932 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.932 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.932 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.932 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.932 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.933 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.933 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.933 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.933 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.933 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.933 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.933 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.933 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.933 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.933 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.933 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.933 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.933 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.934 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.934 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.934 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.934 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.934 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.934 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.934 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.934 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.934 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.934 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.934 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.934 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.934 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.934 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.935 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.935 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.935 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.935 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.935 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.935 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.935 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.935 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.935 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.935 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.935 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.935 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.935 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.936 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.936 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.936 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.936 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.936 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.936 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.936 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.936 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.936 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.936 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.936 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.936 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.936 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.936 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.936 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.936 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.936 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.937 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.937 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.937 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.937 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.937 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.937 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.937 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.937 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.937 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.937 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.937 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.937 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.937 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.937 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.937 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.937 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.937 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.937 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.937 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.938 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.938 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.938 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.938 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.938 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.938 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.938 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.938 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.938 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.938 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.938 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.938 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.938 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.939 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.939 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.939 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.939 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.939 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.939 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.939 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.939 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.939 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.940 14 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [14]
Nov 24 21:56:15 compute-0 ceilometer_agent_compute[200060]: 2025-11-24 21:56:15.949 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:335
Nov 24 21:56:15 compute-0 virtqemud[189136]: End of file while reading data: Input/output error
Nov 24 21:56:16 compute-0 systemd[1]: libpod-a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d.scope: Deactivated successfully.
Nov 24 21:56:16 compute-0 systemd[1]: libpod-a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d.scope: Consumed 1.502s CPU time.
Nov 24 21:56:16 compute-0 podman[200273]: 2025-11-24 21:56:16.124133835 +0000 UTC m=+0.372369415 container died a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Nov 24 21:56:16 compute-0 systemd[1]: a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d-1376b789dd1abcc8.timer: Deactivated successfully.
Nov 24 21:56:16 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d.
Nov 24 21:56:16 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d-userdata-shm.mount: Deactivated successfully.
Nov 24 21:56:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-73ebf8b47c594ccd829688ced526f3cf647a40e44615db62d140b08a1890b73e-merged.mount: Deactivated successfully.
Nov 24 21:56:16 compute-0 podman[200273]: 2025-11-24 21:56:16.185671739 +0000 UTC m=+0.433907299 container cleanup a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true)
Nov 24 21:56:16 compute-0 podman[200273]: ceilometer_agent_compute
Nov 24 21:56:16 compute-0 podman[200306]: ceilometer_agent_compute
Nov 24 21:56:16 compute-0 systemd[1]: edpm_ceilometer_agent_compute.service: Deactivated successfully.
Nov 24 21:56:16 compute-0 systemd[1]: Stopped ceilometer_agent_compute container.
Nov 24 21:56:16 compute-0 systemd[1]: Starting ceilometer_agent_compute container...
Nov 24 21:56:16 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:56:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73ebf8b47c594ccd829688ced526f3cf647a40e44615db62d140b08a1890b73e/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Nov 24 21:56:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73ebf8b47c594ccd829688ced526f3cf647a40e44615db62d140b08a1890b73e/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 21:56:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73ebf8b47c594ccd829688ced526f3cf647a40e44615db62d140b08a1890b73e/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Nov 24 21:56:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73ebf8b47c594ccd829688ced526f3cf647a40e44615db62d140b08a1890b73e/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Nov 24 21:56:16 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d.
Nov 24 21:56:16 compute-0 podman[200317]: 2025-11-24 21:56:16.468245224 +0000 UTC m=+0.158500745 container init a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, container_name=ceilometer_agent_compute)
Nov 24 21:56:16 compute-0 ceilometer_agent_compute[200333]: + sudo -E kolla_set_configs
Nov 24 21:56:16 compute-0 sudo[200339]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 24 21:56:16 compute-0 ceilometer_agent_compute[200333]: sudo: unable to send audit message: Operation not permitted
Nov 24 21:56:16 compute-0 sudo[200339]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Nov 24 21:56:16 compute-0 podman[200317]: 2025-11-24 21:56:16.513892257 +0000 UTC m=+0.204147778 container start a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 24 21:56:16 compute-0 podman[200317]: ceilometer_agent_compute
Nov 24 21:56:16 compute-0 systemd[1]: Started ceilometer_agent_compute container.
Nov 24 21:56:16 compute-0 sudo[200259]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:16 compute-0 ceilometer_agent_compute[200333]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 24 21:56:16 compute-0 ceilometer_agent_compute[200333]: INFO:__main__:Validating config file
Nov 24 21:56:16 compute-0 ceilometer_agent_compute[200333]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 24 21:56:16 compute-0 ceilometer_agent_compute[200333]: INFO:__main__:Copying service configuration files
Nov 24 21:56:16 compute-0 ceilometer_agent_compute[200333]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Nov 24 21:56:16 compute-0 ceilometer_agent_compute[200333]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Nov 24 21:56:16 compute-0 ceilometer_agent_compute[200333]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Nov 24 21:56:16 compute-0 ceilometer_agent_compute[200333]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Nov 24 21:56:16 compute-0 ceilometer_agent_compute[200333]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Nov 24 21:56:16 compute-0 ceilometer_agent_compute[200333]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Nov 24 21:56:16 compute-0 ceilometer_agent_compute[200333]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 24 21:56:16 compute-0 ceilometer_agent_compute[200333]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 24 21:56:16 compute-0 ceilometer_agent_compute[200333]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 24 21:56:16 compute-0 ceilometer_agent_compute[200333]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 24 21:56:16 compute-0 ceilometer_agent_compute[200333]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 24 21:56:16 compute-0 ceilometer_agent_compute[200333]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 24 21:56:16 compute-0 ceilometer_agent_compute[200333]: INFO:__main__:Writing out command to execute
Nov 24 21:56:16 compute-0 sudo[200339]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:16 compute-0 ceilometer_agent_compute[200333]: ++ cat /run_command
Nov 24 21:56:16 compute-0 ceilometer_agent_compute[200333]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Nov 24 21:56:16 compute-0 ceilometer_agent_compute[200333]: + ARGS=
Nov 24 21:56:16 compute-0 ceilometer_agent_compute[200333]: + sudo kolla_copy_cacerts
Nov 24 21:56:16 compute-0 podman[200340]: 2025-11-24 21:56:16.605570055 +0000 UTC m=+0.075069855 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 21:56:16 compute-0 sudo[200365]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 24 21:56:16 compute-0 ceilometer_agent_compute[200333]: sudo: unable to send audit message: Operation not permitted
Nov 24 21:56:16 compute-0 systemd[1]: a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d-5f631cf2ae6d88a4.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 21:56:16 compute-0 systemd[1]: a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d-5f631cf2ae6d88a4.service: Failed with result 'exit-code'.
Nov 24 21:56:16 compute-0 sudo[200365]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Nov 24 21:56:16 compute-0 sudo[200365]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:16 compute-0 ceilometer_agent_compute[200333]: + [[ ! -n '' ]]
Nov 24 21:56:16 compute-0 ceilometer_agent_compute[200333]: + . kolla_extend_start
Nov 24 21:56:16 compute-0 ceilometer_agent_compute[200333]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Nov 24 21:56:16 compute-0 ceilometer_agent_compute[200333]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Nov 24 21:56:16 compute-0 ceilometer_agent_compute[200333]: + umask 0022
Nov 24 21:56:16 compute-0 ceilometer_agent_compute[200333]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Nov 24 21:56:17 compute-0 sudo[200514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxxroiraidvvnddbnhshgqzlcdbdslen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021376.8207126-578-149473275958340/AnsiballZ_stat.py'
Nov 24 21:56:17 compute-0 sudo[200514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.367 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.367 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.367 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.367 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.367 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.367 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.367 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.368 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.368 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.368 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.368 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.368 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.368 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.368 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.368 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.368 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.368 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.368 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.368 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.368 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.369 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.369 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.369 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.369 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.369 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.369 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.369 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.369 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.369 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.369 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.369 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.369 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.370 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.370 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.370 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.370 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.370 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.370 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.370 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.370 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.370 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.370 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.370 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.370 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.370 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.370 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.370 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.371 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.371 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.371 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.371 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.371 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.371 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.371 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.371 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.371 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.371 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.371 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.371 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.371 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.371 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.372 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.372 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.372 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.372 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.372 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.372 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.372 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.372 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.372 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.372 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.372 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.372 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.372 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.372 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.373 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.373 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.373 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.373 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.373 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.373 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.373 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.373 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.373 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.373 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.373 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.373 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.374 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.374 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.374 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.374 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.374 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.374 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.374 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.374 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.374 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.374 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.374 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.374 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.374 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.374 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.375 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.375 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.375 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.375 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.375 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.375 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.375 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.375 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.375 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.375 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.375 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.375 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.375 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.376 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.376 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.376 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.376 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.376 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.376 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.376 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.376 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.376 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.376 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.376 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.376 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.376 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.377 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.377 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.377 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.377 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.377 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.377 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.377 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.377 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.377 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.377 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.377 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.377 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.377 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.377 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.378 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.378 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.378 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.378 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.378 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.378 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.378 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.378 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.378 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.378 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.378 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.378 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.399 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.399 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.400 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.400 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.400 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.400 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.400 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.400 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.400 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.400 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.400 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.400 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.401 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.401 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.401 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.401 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.401 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.401 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.401 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.401 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.401 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.401 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.401 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.402 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.402 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.402 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.402 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.402 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.402 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.402 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.402 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.402 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.402 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.402 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.402 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.403 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.403 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.403 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.403 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.403 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.403 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.403 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.403 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.403 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.403 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.403 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.403 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.404 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.404 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.404 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.404 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.404 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.404 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.404 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.404 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.404 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.404 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.404 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.405 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.405 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.405 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.405 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.405 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.405 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.405 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.405 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.405 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.405 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.405 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.405 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.405 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.406 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.406 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.406 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.406 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.406 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.406 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.406 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.406 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.406 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.406 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.406 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.406 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.407 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.407 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.407 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.407 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.407 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.407 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.407 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.407 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.407 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.407 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.408 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.408 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.408 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.408 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.408 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.408 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.408 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.408 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.408 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.408 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.408 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.409 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.409 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.409 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.409 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.409 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.409 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.409 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.409 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.409 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.409 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.409 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.409 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.409 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.410 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.410 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.410 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.410 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.410 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.410 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.410 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.410 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.410 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.410 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.410 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.410 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.411 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.411 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.411 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.411 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.411 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.411 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.411 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.411 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.411 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.411 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.412 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.412 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.412 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.412 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.412 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.412 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.412 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.412 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.412 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.412 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.412 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.412 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.412 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.413 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.413 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.415 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.418 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.419 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.432 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.442 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.443 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.443 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Nov 24 21:56:17 compute-0 python3.9[200516]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:56:17 compute-0 sudo[200514]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.578 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.578 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.578 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.579 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.579 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.579 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.579 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.579 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.579 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.579 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.579 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.579 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.579 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.579 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.580 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.580 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.580 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.580 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.580 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.580 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.580 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.580 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.580 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.580 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.580 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.581 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.581 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.581 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.581 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.581 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.581 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.581 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.581 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.581 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.581 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.581 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.581 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.581 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.581 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.581 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.582 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.582 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.582 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.582 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.582 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.582 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.582 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.582 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.582 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.582 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.583 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.583 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.583 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.583 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.583 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.583 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.583 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.583 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.583 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.583 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.583 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.583 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.583 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.583 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.584 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.584 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.584 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.584 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.584 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.584 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.584 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.584 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.584 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.584 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.584 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.584 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.584 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.584 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.585 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.585 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.585 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.585 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.585 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.585 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.585 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.585 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.585 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.585 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.585 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.586 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.586 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.586 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.586 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.586 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.586 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.586 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.586 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.586 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.586 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.586 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.586 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.586 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.586 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.587 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.587 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.587 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.587 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.587 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.587 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.587 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.587 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.587 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.587 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.587 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.587 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.587 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.588 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.588 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.588 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.588 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.588 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.588 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.588 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.588 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.588 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.588 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.588 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.588 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.588 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.588 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.588 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.588 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.588 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.588 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.589 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.589 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.589 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.589 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.589 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.589 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.589 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.589 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.589 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.589 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.589 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.589 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.589 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.589 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.589 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.590 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.590 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.590 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.590 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.590 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.590 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.590 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.590 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.590 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.590 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.590 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.590 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.590 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.591 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.591 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.591 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.591 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.591 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.591 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.591 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.594 14 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.12/site-packages/ceilometer/agent.py:64
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.615 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.616 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.616 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.616 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f4c57fbbfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.616 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.617 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.617 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.618 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.618 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.618 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.618 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.618 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.619 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.619 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.619 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.619 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.619 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.619 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.620 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.620 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.620 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.620 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f4c580340b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.621 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.621 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.621 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.621 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f4c57fba4e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.621 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.622 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.622 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.622 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f4c57fbb080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.622 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.622 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.623 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.625 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f4c57fbb1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.625 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.625 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.625 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.626 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f4c57fb9730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.626 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.626 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.626 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.626 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f4c57fbb200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.627 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.627 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f4c57fbb260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.627 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.627 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f4c57fbb2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.627 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.627 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f4c57fbb320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.627 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.627 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f4c57fbb380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.627 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.627 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f4c58034380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.628 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.628 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f4c57fbb740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.628 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.628 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f4c57fbb3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.628 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.628 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f4c57fbb9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.628 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.628 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f4c57fbb440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.628 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.628 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f4c57fbbc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.628 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.628 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f4c57fbbcb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.628 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.628 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f4c57fbbd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.629 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.629 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f4c57fbbda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.629 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.629 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f4c57fbbe30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.629 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.629 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f4c57fbb680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.629 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.629 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f4c57fbbec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.629 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.629 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f4c57fbb6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.629 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.629 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f4c59b99130>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.630 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.630 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f4c57fbbf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.630 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.630 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.630 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.630 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.630 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.630 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.631 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.631 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.631 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.631 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.631 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.631 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.631 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.631 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.632 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.632 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.632 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.632 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.632 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.632 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.632 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.632 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.632 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.633 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.633 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.633 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:56:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:56:17.633 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:56:17 compute-0 sudo[200650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcojnewysqgdatjiftaasscirugrotwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021376.8207126-578-149473275958340/AnsiballZ_copy.py'
Nov 24 21:56:17 compute-0 sudo[200650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:18 compute-0 python3.9[200652]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764021376.8207126-578-149473275958340/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:56:18 compute-0 sudo[200650]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:19 compute-0 sudo[200802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avzxfvzrgzwcgrujycelmjyqrgmvpbbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021378.5233364-595-226716872076780/AnsiballZ_container_config_data.py'
Nov 24 21:56:19 compute-0 sudo[200802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:19 compute-0 python3.9[200804]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=node_exporter.json debug=False
Nov 24 21:56:19 compute-0 sudo[200802]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:19 compute-0 sudo[200954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntnufolzuwsxryhkqhrbunpnsiazkxzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021379.5841255-604-222796643895580/AnsiballZ_container_config_hash.py'
Nov 24 21:56:19 compute-0 sudo[200954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:20 compute-0 python3.9[200956]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 24 21:56:20 compute-0 sudo[200954]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:20 compute-0 sudo[201106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbxjxqwxklkmynvxnooyzjvcvicwkmut ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764021380.5295978-614-87398756547394/AnsiballZ_edpm_container_manage.py'
Nov 24 21:56:20 compute-0 sudo[201106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:21 compute-0 python3[201108]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=node_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Nov 24 21:56:21 compute-0 podman[201143]: 2025-11-24 21:56:21.382112211 +0000 UTC m=+0.060410091 container create c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, config_id=edpm, container_name=node_exporter)
Nov 24 21:56:21 compute-0 podman[201143]: 2025-11-24 21:56:21.347914282 +0000 UTC m=+0.026212202 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Nov 24 21:56:21 compute-0 python3[201108]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck node_exporter --label config_id=edpm --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter:v1.5.0 --web.config.file=/etc/node_exporter/node_exporter.yaml --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl
Nov 24 21:56:21 compute-0 sudo[201106]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:22 compute-0 sudo[201332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqhvjcyckgsfbxotridrdqhmbetvdnxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021381.8029351-622-231964063197764/AnsiballZ_stat.py'
Nov 24 21:56:22 compute-0 sudo[201332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:22 compute-0 python3.9[201334]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:56:22 compute-0 sudo[201332]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:23 compute-0 sudo[201486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irhcorugdjucpvojjfghfoomxejmarkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021382.7764304-631-156462242361497/AnsiballZ_file.py'
Nov 24 21:56:23 compute-0 sudo[201486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:23 compute-0 python3.9[201488]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:56:23 compute-0 sudo[201486]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:23 compute-0 sudo[201637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmdmwmfkochnofosrtvreckaufktwjpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021383.4466429-631-132111010543834/AnsiballZ_copy.py'
Nov 24 21:56:24 compute-0 sudo[201637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:24 compute-0 python3.9[201639]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764021383.4466429-631-132111010543834/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:56:24 compute-0 sudo[201637]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:24 compute-0 sudo[201713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-roiksbhqgqmonxtxlyhuhqqxxxfxlsui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021383.4466429-631-132111010543834/AnsiballZ_systemd.py'
Nov 24 21:56:24 compute-0 sudo[201713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:24 compute-0 python3.9[201715]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 21:56:24 compute-0 systemd[1]: Reloading.
Nov 24 21:56:25 compute-0 systemd-rc-local-generator[201745]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:56:25 compute-0 systemd-sysv-generator[201748]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:56:25 compute-0 sudo[201713]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:25 compute-0 sudo[201825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psrekxvyvtbatkgxrnqycrnagbvfkeai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021383.4466429-631-132111010543834/AnsiballZ_systemd.py'
Nov 24 21:56:25 compute-0 sudo[201825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:25 compute-0 python3.9[201827]: ansible-systemd Invoked with state=restarted name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 21:56:26 compute-0 systemd[1]: Reloading.
Nov 24 21:56:26 compute-0 systemd-rc-local-generator[201858]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:56:26 compute-0 systemd-sysv-generator[201861]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:56:26 compute-0 systemd[1]: Starting node_exporter container...
Nov 24 21:56:26 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:56:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d45716313590b08c897ed370bff1005825fb9b7c2ad766b40b1a40233c62e94/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Nov 24 21:56:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d45716313590b08c897ed370bff1005825fb9b7c2ad766b40b1a40233c62e94/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 21:56:26 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0.
Nov 24 21:56:26 compute-0 podman[201868]: 2025-11-24 21:56:26.593952469 +0000 UTC m=+0.155084331 container init c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 21:56:26 compute-0 node_exporter[201884]: ts=2025-11-24T21:56:26.612Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Nov 24 21:56:26 compute-0 node_exporter[201884]: ts=2025-11-24T21:56:26.612Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Nov 24 21:56:26 compute-0 node_exporter[201884]: ts=2025-11-24T21:56:26.612Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Nov 24 21:56:26 compute-0 node_exporter[201884]: ts=2025-11-24T21:56:26.612Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Nov 24 21:56:26 compute-0 node_exporter[201884]: ts=2025-11-24T21:56:26.612Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Nov 24 21:56:26 compute-0 node_exporter[201884]: ts=2025-11-24T21:56:26.612Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Nov 24 21:56:26 compute-0 node_exporter[201884]: ts=2025-11-24T21:56:26.612Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Nov 24 21:56:26 compute-0 node_exporter[201884]: ts=2025-11-24T21:56:26.613Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Nov 24 21:56:26 compute-0 node_exporter[201884]: ts=2025-11-24T21:56:26.613Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Nov 24 21:56:26 compute-0 node_exporter[201884]: ts=2025-11-24T21:56:26.613Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Nov 24 21:56:26 compute-0 node_exporter[201884]: ts=2025-11-24T21:56:26.613Z caller=node_exporter.go:117 level=info collector=arp
Nov 24 21:56:26 compute-0 node_exporter[201884]: ts=2025-11-24T21:56:26.613Z caller=node_exporter.go:117 level=info collector=bcache
Nov 24 21:56:26 compute-0 node_exporter[201884]: ts=2025-11-24T21:56:26.613Z caller=node_exporter.go:117 level=info collector=bonding
Nov 24 21:56:26 compute-0 node_exporter[201884]: ts=2025-11-24T21:56:26.613Z caller=node_exporter.go:117 level=info collector=btrfs
Nov 24 21:56:26 compute-0 node_exporter[201884]: ts=2025-11-24T21:56:26.613Z caller=node_exporter.go:117 level=info collector=conntrack
Nov 24 21:56:26 compute-0 node_exporter[201884]: ts=2025-11-24T21:56:26.613Z caller=node_exporter.go:117 level=info collector=cpu
Nov 24 21:56:26 compute-0 node_exporter[201884]: ts=2025-11-24T21:56:26.613Z caller=node_exporter.go:117 level=info collector=cpufreq
Nov 24 21:56:26 compute-0 node_exporter[201884]: ts=2025-11-24T21:56:26.613Z caller=node_exporter.go:117 level=info collector=diskstats
Nov 24 21:56:26 compute-0 node_exporter[201884]: ts=2025-11-24T21:56:26.613Z caller=node_exporter.go:117 level=info collector=edac
Nov 24 21:56:26 compute-0 node_exporter[201884]: ts=2025-11-24T21:56:26.613Z caller=node_exporter.go:117 level=info collector=fibrechannel
Nov 24 21:56:26 compute-0 node_exporter[201884]: ts=2025-11-24T21:56:26.613Z caller=node_exporter.go:117 level=info collector=filefd
Nov 24 21:56:26 compute-0 node_exporter[201884]: ts=2025-11-24T21:56:26.613Z caller=node_exporter.go:117 level=info collector=filesystem
Nov 24 21:56:26 compute-0 node_exporter[201884]: ts=2025-11-24T21:56:26.613Z caller=node_exporter.go:117 level=info collector=infiniband
Nov 24 21:56:26 compute-0 node_exporter[201884]: ts=2025-11-24T21:56:26.613Z caller=node_exporter.go:117 level=info collector=ipvs
Nov 24 21:56:26 compute-0 node_exporter[201884]: ts=2025-11-24T21:56:26.613Z caller=node_exporter.go:117 level=info collector=loadavg
Nov 24 21:56:26 compute-0 node_exporter[201884]: ts=2025-11-24T21:56:26.613Z caller=node_exporter.go:117 level=info collector=mdadm
Nov 24 21:56:26 compute-0 node_exporter[201884]: ts=2025-11-24T21:56:26.613Z caller=node_exporter.go:117 level=info collector=meminfo
Nov 24 21:56:26 compute-0 node_exporter[201884]: ts=2025-11-24T21:56:26.613Z caller=node_exporter.go:117 level=info collector=netclass
Nov 24 21:56:26 compute-0 node_exporter[201884]: ts=2025-11-24T21:56:26.613Z caller=node_exporter.go:117 level=info collector=netdev
Nov 24 21:56:26 compute-0 node_exporter[201884]: ts=2025-11-24T21:56:26.613Z caller=node_exporter.go:117 level=info collector=netstat
Nov 24 21:56:26 compute-0 node_exporter[201884]: ts=2025-11-24T21:56:26.613Z caller=node_exporter.go:117 level=info collector=nfs
Nov 24 21:56:26 compute-0 node_exporter[201884]: ts=2025-11-24T21:56:26.613Z caller=node_exporter.go:117 level=info collector=nfsd
Nov 24 21:56:26 compute-0 node_exporter[201884]: ts=2025-11-24T21:56:26.613Z caller=node_exporter.go:117 level=info collector=nvme
Nov 24 21:56:26 compute-0 node_exporter[201884]: ts=2025-11-24T21:56:26.613Z caller=node_exporter.go:117 level=info collector=schedstat
Nov 24 21:56:26 compute-0 node_exporter[201884]: ts=2025-11-24T21:56:26.613Z caller=node_exporter.go:117 level=info collector=sockstat
Nov 24 21:56:26 compute-0 node_exporter[201884]: ts=2025-11-24T21:56:26.613Z caller=node_exporter.go:117 level=info collector=softnet
Nov 24 21:56:26 compute-0 node_exporter[201884]: ts=2025-11-24T21:56:26.613Z caller=node_exporter.go:117 level=info collector=systemd
Nov 24 21:56:26 compute-0 node_exporter[201884]: ts=2025-11-24T21:56:26.613Z caller=node_exporter.go:117 level=info collector=tapestats
Nov 24 21:56:26 compute-0 node_exporter[201884]: ts=2025-11-24T21:56:26.613Z caller=node_exporter.go:117 level=info collector=udp_queues
Nov 24 21:56:26 compute-0 node_exporter[201884]: ts=2025-11-24T21:56:26.613Z caller=node_exporter.go:117 level=info collector=vmstat
Nov 24 21:56:26 compute-0 node_exporter[201884]: ts=2025-11-24T21:56:26.613Z caller=node_exporter.go:117 level=info collector=xfs
Nov 24 21:56:26 compute-0 node_exporter[201884]: ts=2025-11-24T21:56:26.613Z caller=node_exporter.go:117 level=info collector=zfs
Nov 24 21:56:26 compute-0 node_exporter[201884]: ts=2025-11-24T21:56:26.614Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Nov 24 21:56:26 compute-0 node_exporter[201884]: ts=2025-11-24T21:56:26.615Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Nov 24 21:56:26 compute-0 podman[201868]: 2025-11-24 21:56:26.627780786 +0000 UTC m=+0.188912688 container start c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 24 21:56:26 compute-0 podman[201868]: node_exporter
Nov 24 21:56:26 compute-0 systemd[1]: Started node_exporter container.
Nov 24 21:56:26 compute-0 sudo[201825]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:26 compute-0 podman[201893]: 2025-11-24 21:56:26.709635509 +0000 UTC m=+0.069234924 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 21:56:27 compute-0 sshd-session[201905]: Invalid user sol from 45.148.10.240 port 39300
Nov 24 21:56:27 compute-0 sshd-session[201905]: Connection closed by invalid user sol 45.148.10.240 port 39300 [preauth]
Nov 24 21:56:27 compute-0 sudo[202068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlceyypyslaxenkzfygdfvdjbqudhiyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021386.9488637-655-55477628677760/AnsiballZ_systemd.py'
Nov 24 21:56:27 compute-0 sudo[202068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:27 compute-0 python3.9[202070]: ansible-ansible.builtin.systemd Invoked with name=edpm_node_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 21:56:27 compute-0 systemd[1]: Stopping node_exporter container...
Nov 24 21:56:27 compute-0 systemd[1]: libpod-c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0.scope: Deactivated successfully.
Nov 24 21:56:27 compute-0 podman[202074]: 2025-11-24 21:56:27.755992252 +0000 UTC m=+0.075841508 container died c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 21:56:27 compute-0 systemd[1]: c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0-50c14b4b9039c086.timer: Deactivated successfully.
Nov 24 21:56:27 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0.
Nov 24 21:56:27 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0-userdata-shm.mount: Deactivated successfully.
Nov 24 21:56:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d45716313590b08c897ed370bff1005825fb9b7c2ad766b40b1a40233c62e94-merged.mount: Deactivated successfully.
Nov 24 21:56:27 compute-0 podman[202074]: 2025-11-24 21:56:27.807384053 +0000 UTC m=+0.127233269 container cleanup c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 21:56:27 compute-0 podman[202074]: node_exporter
Nov 24 21:56:27 compute-0 systemd[1]: edpm_node_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Nov 24 21:56:27 compute-0 podman[202100]: node_exporter
Nov 24 21:56:27 compute-0 systemd[1]: edpm_node_exporter.service: Failed with result 'exit-code'.
Nov 24 21:56:27 compute-0 systemd[1]: Stopped node_exporter container.
Nov 24 21:56:27 compute-0 systemd[1]: Starting node_exporter container...
Nov 24 21:56:28 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:56:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d45716313590b08c897ed370bff1005825fb9b7c2ad766b40b1a40233c62e94/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Nov 24 21:56:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d45716313590b08c897ed370bff1005825fb9b7c2ad766b40b1a40233c62e94/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 21:56:28 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0.
Nov 24 21:56:28 compute-0 podman[202113]: 2025-11-24 21:56:28.080506015 +0000 UTC m=+0.143491252 container init c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 24 21:56:28 compute-0 node_exporter[202128]: ts=2025-11-24T21:56:28.094Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Nov 24 21:56:28 compute-0 node_exporter[202128]: ts=2025-11-24T21:56:28.094Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Nov 24 21:56:28 compute-0 node_exporter[202128]: ts=2025-11-24T21:56:28.094Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Nov 24 21:56:28 compute-0 node_exporter[202128]: ts=2025-11-24T21:56:28.095Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Nov 24 21:56:28 compute-0 node_exporter[202128]: ts=2025-11-24T21:56:28.095Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Nov 24 21:56:28 compute-0 node_exporter[202128]: ts=2025-11-24T21:56:28.095Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Nov 24 21:56:28 compute-0 node_exporter[202128]: ts=2025-11-24T21:56:28.096Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Nov 24 21:56:28 compute-0 node_exporter[202128]: ts=2025-11-24T21:56:28.096Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Nov 24 21:56:28 compute-0 node_exporter[202128]: ts=2025-11-24T21:56:28.096Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Nov 24 21:56:28 compute-0 node_exporter[202128]: ts=2025-11-24T21:56:28.096Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Nov 24 21:56:28 compute-0 node_exporter[202128]: ts=2025-11-24T21:56:28.096Z caller=node_exporter.go:117 level=info collector=arp
Nov 24 21:56:28 compute-0 node_exporter[202128]: ts=2025-11-24T21:56:28.096Z caller=node_exporter.go:117 level=info collector=bcache
Nov 24 21:56:28 compute-0 node_exporter[202128]: ts=2025-11-24T21:56:28.096Z caller=node_exporter.go:117 level=info collector=bonding
Nov 24 21:56:28 compute-0 node_exporter[202128]: ts=2025-11-24T21:56:28.096Z caller=node_exporter.go:117 level=info collector=btrfs
Nov 24 21:56:28 compute-0 node_exporter[202128]: ts=2025-11-24T21:56:28.096Z caller=node_exporter.go:117 level=info collector=conntrack
Nov 24 21:56:28 compute-0 node_exporter[202128]: ts=2025-11-24T21:56:28.096Z caller=node_exporter.go:117 level=info collector=cpu
Nov 24 21:56:28 compute-0 node_exporter[202128]: ts=2025-11-24T21:56:28.096Z caller=node_exporter.go:117 level=info collector=cpufreq
Nov 24 21:56:28 compute-0 node_exporter[202128]: ts=2025-11-24T21:56:28.096Z caller=node_exporter.go:117 level=info collector=diskstats
Nov 24 21:56:28 compute-0 node_exporter[202128]: ts=2025-11-24T21:56:28.096Z caller=node_exporter.go:117 level=info collector=edac
Nov 24 21:56:28 compute-0 node_exporter[202128]: ts=2025-11-24T21:56:28.096Z caller=node_exporter.go:117 level=info collector=fibrechannel
Nov 24 21:56:28 compute-0 node_exporter[202128]: ts=2025-11-24T21:56:28.096Z caller=node_exporter.go:117 level=info collector=filefd
Nov 24 21:56:28 compute-0 node_exporter[202128]: ts=2025-11-24T21:56:28.096Z caller=node_exporter.go:117 level=info collector=filesystem
Nov 24 21:56:28 compute-0 node_exporter[202128]: ts=2025-11-24T21:56:28.096Z caller=node_exporter.go:117 level=info collector=infiniband
Nov 24 21:56:28 compute-0 node_exporter[202128]: ts=2025-11-24T21:56:28.096Z caller=node_exporter.go:117 level=info collector=ipvs
Nov 24 21:56:28 compute-0 node_exporter[202128]: ts=2025-11-24T21:56:28.096Z caller=node_exporter.go:117 level=info collector=loadavg
Nov 24 21:56:28 compute-0 node_exporter[202128]: ts=2025-11-24T21:56:28.096Z caller=node_exporter.go:117 level=info collector=mdadm
Nov 24 21:56:28 compute-0 node_exporter[202128]: ts=2025-11-24T21:56:28.096Z caller=node_exporter.go:117 level=info collector=meminfo
Nov 24 21:56:28 compute-0 node_exporter[202128]: ts=2025-11-24T21:56:28.096Z caller=node_exporter.go:117 level=info collector=netclass
Nov 24 21:56:28 compute-0 node_exporter[202128]: ts=2025-11-24T21:56:28.096Z caller=node_exporter.go:117 level=info collector=netdev
Nov 24 21:56:28 compute-0 node_exporter[202128]: ts=2025-11-24T21:56:28.096Z caller=node_exporter.go:117 level=info collector=netstat
Nov 24 21:56:28 compute-0 node_exporter[202128]: ts=2025-11-24T21:56:28.096Z caller=node_exporter.go:117 level=info collector=nfs
Nov 24 21:56:28 compute-0 node_exporter[202128]: ts=2025-11-24T21:56:28.096Z caller=node_exporter.go:117 level=info collector=nfsd
Nov 24 21:56:28 compute-0 node_exporter[202128]: ts=2025-11-24T21:56:28.096Z caller=node_exporter.go:117 level=info collector=nvme
Nov 24 21:56:28 compute-0 node_exporter[202128]: ts=2025-11-24T21:56:28.096Z caller=node_exporter.go:117 level=info collector=schedstat
Nov 24 21:56:28 compute-0 node_exporter[202128]: ts=2025-11-24T21:56:28.096Z caller=node_exporter.go:117 level=info collector=sockstat
Nov 24 21:56:28 compute-0 node_exporter[202128]: ts=2025-11-24T21:56:28.096Z caller=node_exporter.go:117 level=info collector=softnet
Nov 24 21:56:28 compute-0 node_exporter[202128]: ts=2025-11-24T21:56:28.096Z caller=node_exporter.go:117 level=info collector=systemd
Nov 24 21:56:28 compute-0 node_exporter[202128]: ts=2025-11-24T21:56:28.096Z caller=node_exporter.go:117 level=info collector=tapestats
Nov 24 21:56:28 compute-0 node_exporter[202128]: ts=2025-11-24T21:56:28.096Z caller=node_exporter.go:117 level=info collector=udp_queues
Nov 24 21:56:28 compute-0 node_exporter[202128]: ts=2025-11-24T21:56:28.096Z caller=node_exporter.go:117 level=info collector=vmstat
Nov 24 21:56:28 compute-0 node_exporter[202128]: ts=2025-11-24T21:56:28.096Z caller=node_exporter.go:117 level=info collector=xfs
Nov 24 21:56:28 compute-0 node_exporter[202128]: ts=2025-11-24T21:56:28.096Z caller=node_exporter.go:117 level=info collector=zfs
Nov 24 21:56:28 compute-0 node_exporter[202128]: ts=2025-11-24T21:56:28.097Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Nov 24 21:56:28 compute-0 node_exporter[202128]: ts=2025-11-24T21:56:28.097Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Nov 24 21:56:28 compute-0 podman[202113]: 2025-11-24 21:56:28.121500074 +0000 UTC m=+0.184485291 container start c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 24 21:56:28 compute-0 podman[202113]: node_exporter
Nov 24 21:56:28 compute-0 systemd[1]: Started node_exporter container.
Nov 24 21:56:28 compute-0 sudo[202068]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:28 compute-0 podman[202137]: 2025-11-24 21:56:28.215433571 +0000 UTC m=+0.076879121 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 21:56:28 compute-0 sudo[202310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjtaokaipawtscupkxbrdssfxpslrdbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021388.3927455-663-247490450663462/AnsiballZ_stat.py'
Nov 24 21:56:28 compute-0 sudo[202310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:29 compute-0 python3.9[202312]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:56:29 compute-0 sudo[202310]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:29 compute-0 sudo[202433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyhsugghjphtrvwmurvvdlviydkftjma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021388.3927455-663-247490450663462/AnsiballZ_copy.py'
Nov 24 21:56:29 compute-0 sudo[202433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:29 compute-0 python3.9[202435]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764021388.3927455-663-247490450663462/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:56:29 compute-0 sudo[202433]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:30 compute-0 sudo[202585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmmokvistcnbblfkvffvsuozwrhehqfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021390.1649723-680-92927329293568/AnsiballZ_container_config_data.py'
Nov 24 21:56:30 compute-0 sudo[202585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:30 compute-0 python3.9[202587]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=podman_exporter.json debug=False
Nov 24 21:56:30 compute-0 sudo[202585]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:31 compute-0 sudo[202760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enxmlojtxwlpltpbwfroazlacwnvwudw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021391.0610409-689-215025369163565/AnsiballZ_container_config_hash.py'
Nov 24 21:56:31 compute-0 sudo[202760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:31 compute-0 podman[202712]: 2025-11-24 21:56:31.481546062 +0000 UTC m=+0.112928485 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 24 21:56:31 compute-0 podman[202711]: 2025-11-24 21:56:31.493471321 +0000 UTC m=+0.129540200 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:56:31 compute-0 python3.9[202770]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 24 21:56:31 compute-0 sudo[202760]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:32 compute-0 sudo[202931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nggtdlxxmsydlarlohmkynudusxtacyc ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764021392.0021956-699-207385863790772/AnsiballZ_edpm_container_manage.py'
Nov 24 21:56:32 compute-0 sudo[202931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:32 compute-0 python3[202933]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=podman_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Nov 24 21:56:34 compute-0 podman[202947]: 2025-11-24 21:56:34.023407108 +0000 UTC m=+1.303466460 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Nov 24 21:56:34 compute-0 podman[203045]: 2025-11-24 21:56:34.169138868 +0000 UTC m=+0.058063527 container create 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, config_id=edpm, container_name=podman_exporter, managed_by=edpm_ansible)
Nov 24 21:56:34 compute-0 podman[203045]: 2025-11-24 21:56:34.135524898 +0000 UTC m=+0.024449607 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Nov 24 21:56:34 compute-0 python3[202933]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env OS_ENDPOINT_TYPE=internal --env CONTAINER_HOST=unix:///run/podman/podman.sock --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=edpm --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter:v1.10.1 --web.config.file=/etc/podman_exporter/podman_exporter.yaml
Nov 24 21:56:34 compute-0 sudo[202931]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:34 compute-0 sudo[203233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jogkuiooetbmizggqnbdijfgrxxgbeso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021394.6209464-707-83479251142315/AnsiballZ_stat.py'
Nov 24 21:56:34 compute-0 sudo[203233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:35 compute-0 python3.9[203235]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:56:35 compute-0 sudo[203233]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:36 compute-0 sudo[203387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufzfpqdmjnimndvbzozrpmqttuyovfmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021395.622773-716-207769777459481/AnsiballZ_file.py'
Nov 24 21:56:36 compute-0 sudo[203387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:36 compute-0 python3.9[203389]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:56:36 compute-0 sudo[203387]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:36 compute-0 sudo[203538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdtpzypnhkxpotezmjkabyufpyhmfftx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021396.304826-716-103400073487515/AnsiballZ_copy.py'
Nov 24 21:56:36 compute-0 sudo[203538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:37 compute-0 python3.9[203540]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764021396.304826-716-103400073487515/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:56:37 compute-0 sudo[203538]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:37 compute-0 sudo[203614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyfgckeqaozmirpdgbnjtuqxrfwyzrsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021396.304826-716-103400073487515/AnsiballZ_systemd.py'
Nov 24 21:56:37 compute-0 sudo[203614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:37 compute-0 python3.9[203616]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 21:56:37 compute-0 systemd[1]: Reloading.
Nov 24 21:56:37 compute-0 systemd-rc-local-generator[203645]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:56:37 compute-0 systemd-sysv-generator[203650]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:56:38 compute-0 sudo[203614]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:38 compute-0 sudo[203725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thuffnsszlqlfbebjswlskqqzzgzvswv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021396.304826-716-103400073487515/AnsiballZ_systemd.py'
Nov 24 21:56:38 compute-0 sudo[203725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:38 compute-0 python3.9[203727]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 21:56:38 compute-0 systemd[1]: Reloading.
Nov 24 21:56:39 compute-0 systemd-rc-local-generator[203758]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:56:39 compute-0 systemd-sysv-generator[203761]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:56:39 compute-0 systemd[1]: Starting podman_exporter container...
Nov 24 21:56:39 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:56:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d141fae4179179fbd7c6333a78860aca2cf4ffba3e95293e9531f42d8cb4f080/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 21:56:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d141fae4179179fbd7c6333a78860aca2cf4ffba3e95293e9531f42d8cb4f080/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Nov 24 21:56:39 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27.
Nov 24 21:56:39 compute-0 podman[203768]: 2025-11-24 21:56:39.44181537 +0000 UTC m=+0.181618632 container init 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 24 21:56:39 compute-0 podman_exporter[203784]: ts=2025-11-24T21:56:39.466Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Nov 24 21:56:39 compute-0 podman_exporter[203784]: ts=2025-11-24T21:56:39.467Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Nov 24 21:56:39 compute-0 podman_exporter[203784]: ts=2025-11-24T21:56:39.467Z caller=handler.go:94 level=info msg="enabled collectors"
Nov 24 21:56:39 compute-0 podman_exporter[203784]: ts=2025-11-24T21:56:39.467Z caller=handler.go:105 level=info collector=container
Nov 24 21:56:39 compute-0 podman[203768]: 2025-11-24 21:56:39.47572614 +0000 UTC m=+0.215529362 container start 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 24 21:56:39 compute-0 podman[203768]: podman_exporter
Nov 24 21:56:39 compute-0 systemd[1]: Starting Podman API Service...
Nov 24 21:56:39 compute-0 systemd[1]: Started podman_exporter container.
Nov 24 21:56:39 compute-0 systemd[1]: Started Podman API Service.
Nov 24 21:56:39 compute-0 podman[203795]: time="2025-11-24T21:56:39Z" level=info msg="/usr/bin/podman filtering at log level info"
Nov 24 21:56:39 compute-0 podman[203795]: time="2025-11-24T21:56:39Z" level=info msg="Setting parallel job count to 25"
Nov 24 21:56:39 compute-0 podman[203795]: time="2025-11-24T21:56:39Z" level=info msg="Using sqlite as database backend"
Nov 24 21:56:39 compute-0 podman[203795]: time="2025-11-24T21:56:39Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Nov 24 21:56:39 compute-0 sudo[203725]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:39 compute-0 podman[203795]: time="2025-11-24T21:56:39Z" level=info msg="Using systemd socket activation to determine API endpoint"
Nov 24 21:56:39 compute-0 podman[203795]: time="2025-11-24T21:56:39Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Nov 24 21:56:39 compute-0 podman[203795]: @ - - [24/Nov/2025:21:56:39 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Nov 24 21:56:39 compute-0 podman[203795]: time="2025-11-24T21:56:39Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 21:56:39 compute-0 podman[203795]: @ - - [24/Nov/2025:21:56:39 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 19588 "" "Go-http-client/1.1"
Nov 24 21:56:39 compute-0 podman_exporter[203784]: ts=2025-11-24T21:56:39.589Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Nov 24 21:56:39 compute-0 podman_exporter[203784]: ts=2025-11-24T21:56:39.589Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Nov 24 21:56:39 compute-0 podman_exporter[203784]: ts=2025-11-24T21:56:39.590Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Nov 24 21:56:39 compute-0 podman[203793]: 2025-11-24 21:56:39.591714519 +0000 UTC m=+0.094534327 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=starting, health_failing_streak=1, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 24 21:56:39 compute-0 systemd[1]: 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27-1bb20696a94598ce.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 21:56:39 compute-0 systemd[1]: 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27-1bb20696a94598ce.service: Failed with result 'exit-code'.
Nov 24 21:56:40 compute-0 sudo[203981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdrnabehmowubonciqmviqoovoxgjhto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021399.801493-740-203917280671038/AnsiballZ_systemd.py'
Nov 24 21:56:40 compute-0 sudo[203981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:40 compute-0 sshd-session[203907]: Invalid user validator from 193.32.162.145 port 40080
Nov 24 21:56:40 compute-0 sshd-session[203907]: Connection closed by invalid user validator 193.32.162.145 port 40080 [preauth]
Nov 24 21:56:40 compute-0 python3.9[203983]: ansible-ansible.builtin.systemd Invoked with name=edpm_podman_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 21:56:40 compute-0 systemd[1]: Stopping podman_exporter container...
Nov 24 21:56:40 compute-0 podman[203795]: @ - - [24/Nov/2025:21:56:39 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 1449 "" "Go-http-client/1.1"
Nov 24 21:56:40 compute-0 systemd[1]: libpod-9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27.scope: Deactivated successfully.
Nov 24 21:56:40 compute-0 conmon[203784]: conmon 9d00b43530e24fb4754a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27.scope/container/memory.events
Nov 24 21:56:40 compute-0 podman[203987]: 2025-11-24 21:56:40.69974255 +0000 UTC m=+0.072016469 container died 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 24 21:56:40 compute-0 systemd[1]: 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27-1bb20696a94598ce.timer: Deactivated successfully.
Nov 24 21:56:40 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27.
Nov 24 21:56:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27-userdata-shm.mount: Deactivated successfully.
Nov 24 21:56:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-d141fae4179179fbd7c6333a78860aca2cf4ffba3e95293e9531f42d8cb4f080-merged.mount: Deactivated successfully.
Nov 24 21:56:40 compute-0 podman[203987]: 2025-11-24 21:56:40.919916775 +0000 UTC m=+0.292190664 container cleanup 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 24 21:56:40 compute-0 podman[203987]: podman_exporter
Nov 24 21:56:40 compute-0 systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Nov 24 21:56:41 compute-0 podman[204017]: podman_exporter
Nov 24 21:56:41 compute-0 systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'.
Nov 24 21:56:41 compute-0 systemd[1]: Stopped podman_exporter container.
Nov 24 21:56:41 compute-0 systemd[1]: Starting podman_exporter container...
Nov 24 21:56:41 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:56:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d141fae4179179fbd7c6333a78860aca2cf4ffba3e95293e9531f42d8cb4f080/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 21:56:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d141fae4179179fbd7c6333a78860aca2cf4ffba3e95293e9531f42d8cb4f080/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Nov 24 21:56:41 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27.
Nov 24 21:56:41 compute-0 podman[204030]: 2025-11-24 21:56:41.216213104 +0000 UTC m=+0.164456250 container init 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 24 21:56:41 compute-0 podman_exporter[204045]: ts=2025-11-24T21:56:41.237Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Nov 24 21:56:41 compute-0 podman_exporter[204045]: ts=2025-11-24T21:56:41.237Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Nov 24 21:56:41 compute-0 podman_exporter[204045]: ts=2025-11-24T21:56:41.237Z caller=handler.go:94 level=info msg="enabled collectors"
Nov 24 21:56:41 compute-0 podman_exporter[204045]: ts=2025-11-24T21:56:41.237Z caller=handler.go:105 level=info collector=container
Nov 24 21:56:41 compute-0 podman[203795]: @ - - [24/Nov/2025:21:56:41 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Nov 24 21:56:41 compute-0 podman[203795]: time="2025-11-24T21:56:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 21:56:41 compute-0 podman[204030]: 2025-11-24 21:56:41.248639318 +0000 UTC m=+0.196882464 container start 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 24 21:56:41 compute-0 podman[204030]: podman_exporter
Nov 24 21:56:41 compute-0 systemd[1]: Started podman_exporter container.
Nov 24 21:56:41 compute-0 podman[203795]: @ - - [24/Nov/2025:21:56:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 19590 "" "Go-http-client/1.1"
Nov 24 21:56:41 compute-0 podman_exporter[204045]: ts=2025-11-24T21:56:41.271Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Nov 24 21:56:41 compute-0 podman_exporter[204045]: ts=2025-11-24T21:56:41.271Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Nov 24 21:56:41 compute-0 podman_exporter[204045]: ts=2025-11-24T21:56:41.273Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Nov 24 21:56:41 compute-0 sudo[203981]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:41 compute-0 podman[204055]: 2025-11-24 21:56:41.334394302 +0000 UTC m=+0.066195340 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 24 21:56:41 compute-0 sudo[204229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrvetkhkyujndmhkigpgjydorickeccm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021401.4960544-748-32022836367789/AnsiballZ_stat.py'
Nov 24 21:56:41 compute-0 sudo[204229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:42 compute-0 python3.9[204231]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:56:42 compute-0 sudo[204229]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:42 compute-0 sudo[204352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dweecrgmrgitoeqwzubztonufquljbzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021401.4960544-748-32022836367789/AnsiballZ_copy.py'
Nov 24 21:56:42 compute-0 sudo[204352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:42 compute-0 python3.9[204354]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764021401.4960544-748-32022836367789/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:56:42 compute-0 sudo[204352]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:43 compute-0 sudo[204504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncsjtsuodliydhhivhjvgwrnhxdemtjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021403.1631923-765-211662933548684/AnsiballZ_container_config_data.py'
Nov 24 21:56:43 compute-0 sudo[204504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:43 compute-0 python3.9[204506]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=openstack_network_exporter.json debug=False
Nov 24 21:56:43 compute-0 sudo[204504]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:44 compute-0 sudo[204656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzimwqonzfabciwpvwezmghlzltohekw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021403.9062023-774-94484997392270/AnsiballZ_container_config_hash.py'
Nov 24 21:56:44 compute-0 sudo[204656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:44 compute-0 python3.9[204658]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 24 21:56:44 compute-0 sudo[204656]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:45 compute-0 sudo[204821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qptbswviyxtjezjxtgzhafkoioweiljp ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764021404.7892082-784-90999666043205/AnsiballZ_edpm_container_manage.py'
Nov 24 21:56:45 compute-0 sudo[204821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:45 compute-0 podman[204782]: 2025-11-24 21:56:45.228666843 +0000 UTC m=+0.101989088 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd)
Nov 24 21:56:45 compute-0 python3[204827]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=openstack_network_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Nov 24 21:56:48 compute-0 podman[204888]: 2025-11-24 21:56:48.070993049 +0000 UTC m=+0.920749618 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=2, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 21:56:48 compute-0 systemd[1]: a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d-5f631cf2ae6d88a4.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 21:56:48 compute-0 systemd[1]: a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d-5f631cf2ae6d88a4.service: Failed with result 'exit-code'.
Nov 24 21:56:48 compute-0 podman[204845]: 2025-11-24 21:56:48.445104746 +0000 UTC m=+2.883161209 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Nov 24 21:56:48 compute-0 podman[204960]: 2025-11-24 21:56:48.610462834 +0000 UTC m=+0.057129879 container create 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, vcs-type=git, name=ubi9-minimal, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., config_id=edpm, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.expose-services=, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9)
Nov 24 21:56:48 compute-0 podman[204960]: 2025-11-24 21:56:48.581355073 +0000 UTC m=+0.028022118 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Nov 24 21:56:48 compute-0 python3[204827]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OS_ENDPOINT_TYPE=internal --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=edpm --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Nov 24 21:56:48 compute-0 sudo[204821]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:49 compute-0 sudo[205149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehthozeskrnnkrgcrhmpufptaywatnir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021408.9485521-792-141723137839289/AnsiballZ_stat.py'
Nov 24 21:56:49 compute-0 sudo[205149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:49 compute-0 python3.9[205151]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:56:49 compute-0 sudo[205149]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:50 compute-0 sudo[205303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-liovpofonknbqmipbysbteaevsdyiudd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021409.855605-801-144352165484027/AnsiballZ_file.py'
Nov 24 21:56:50 compute-0 sudo[205303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:50 compute-0 python3.9[205305]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:56:50 compute-0 sudo[205303]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:51 compute-0 sudo[205454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjcjcsvbsskayowcowxrvxuemhlubxqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021410.555292-801-140517002099795/AnsiballZ_copy.py'
Nov 24 21:56:51 compute-0 sudo[205454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:51 compute-0 python3.9[205456]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764021410.555292-801-140517002099795/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:56:51 compute-0 sudo[205454]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:51 compute-0 sudo[205530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efolzcrzddbrsbcjsdufjyssaswzsmdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021410.555292-801-140517002099795/AnsiballZ_systemd.py'
Nov 24 21:56:51 compute-0 sudo[205530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:51 compute-0 python3.9[205532]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 21:56:51 compute-0 systemd[1]: Reloading.
Nov 24 21:56:52 compute-0 systemd-rc-local-generator[205561]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:56:52 compute-0 systemd-sysv-generator[205564]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:56:52 compute-0 sudo[205530]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:52 compute-0 sudo[205642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jaonqacgqmxttepyqkamuxskipsynibu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021410.555292-801-140517002099795/AnsiballZ_systemd.py'
Nov 24 21:56:52 compute-0 sudo[205642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:53 compute-0 python3.9[205644]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 21:56:53 compute-0 systemd[1]: Reloading.
Nov 24 21:56:53 compute-0 systemd-sysv-generator[205675]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:56:53 compute-0 systemd-rc-local-generator[205672]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:56:53 compute-0 systemd[1]: Starting openstack_network_exporter container...
Nov 24 21:56:53 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:56:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31fac55f9418ebc0be43e86d97c5be3a02fe203531a846c15a018ea7b3775c13/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 24 21:56:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31fac55f9418ebc0be43e86d97c5be3a02fe203531a846c15a018ea7b3775c13/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 21:56:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31fac55f9418ebc0be43e86d97c5be3a02fe203531a846c15a018ea7b3775c13/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Nov 24 21:56:53 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a.
Nov 24 21:56:53 compute-0 podman[205683]: 2025-11-24 21:56:53.542875052 +0000 UTC m=+0.136347941 container init 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-type=git, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., version=9.6, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 24 21:56:53 compute-0 openstack_network_exporter[205699]: INFO    21:56:53 main.go:48: registering *bridge.Collector
Nov 24 21:56:53 compute-0 openstack_network_exporter[205699]: INFO    21:56:53 main.go:48: registering *coverage.Collector
Nov 24 21:56:53 compute-0 openstack_network_exporter[205699]: INFO    21:56:53 main.go:48: registering *datapath.Collector
Nov 24 21:56:53 compute-0 openstack_network_exporter[205699]: INFO    21:56:53 main.go:48: registering *iface.Collector
Nov 24 21:56:53 compute-0 openstack_network_exporter[205699]: INFO    21:56:53 main.go:48: registering *memory.Collector
Nov 24 21:56:53 compute-0 openstack_network_exporter[205699]: INFO    21:56:53 main.go:48: registering *ovnnorthd.Collector
Nov 24 21:56:53 compute-0 openstack_network_exporter[205699]: INFO    21:56:53 main.go:48: registering *ovn.Collector
Nov 24 21:56:53 compute-0 openstack_network_exporter[205699]: INFO    21:56:53 main.go:48: registering *ovsdbserver.Collector
Nov 24 21:56:53 compute-0 openstack_network_exporter[205699]: INFO    21:56:53 main.go:48: registering *pmd_perf.Collector
Nov 24 21:56:53 compute-0 openstack_network_exporter[205699]: INFO    21:56:53 main.go:48: registering *pmd_rxq.Collector
Nov 24 21:56:53 compute-0 openstack_network_exporter[205699]: INFO    21:56:53 main.go:48: registering *vswitch.Collector
Nov 24 21:56:53 compute-0 openstack_network_exporter[205699]: NOTICE  21:56:53 main.go:76: listening on https://:9105/metrics
Nov 24 21:56:53 compute-0 podman[205683]: 2025-11-24 21:56:53.578696715 +0000 UTC m=+0.172169564 container start 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, architecture=x86_64, container_name=openstack_network_exporter, io.buildah.version=1.33.7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, config_id=edpm, managed_by=edpm_ansible, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 24 21:56:53 compute-0 podman[205683]: openstack_network_exporter
Nov 24 21:56:53 compute-0 systemd[1]: Started openstack_network_exporter container.
Nov 24 21:56:53 compute-0 sudo[205642]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:53 compute-0 podman[205709]: 2025-11-24 21:56:53.686747361 +0000 UTC m=+0.091416279 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, managed_by=edpm_ansible, release=1755695350, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., distribution-scope=public)
Nov 24 21:56:54 compute-0 sudo[205881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lteiuvnoeowrtztwanlhnxnfvvcnxcjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021413.8402255-825-250961829718862/AnsiballZ_systemd.py'
Nov 24 21:56:54 compute-0 sudo[205881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:54 compute-0 python3.9[205883]: ansible-ansible.builtin.systemd Invoked with name=edpm_openstack_network_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 21:56:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:56:54.546 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 21:56:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:56:54.546 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 21:56:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:56:54.547 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 21:56:54 compute-0 systemd[1]: Stopping openstack_network_exporter container...
Nov 24 21:56:54 compute-0 systemd[1]: libpod-366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a.scope: Deactivated successfully.
Nov 24 21:56:54 compute-0 podman[205887]: 2025-11-24 21:56:54.682624424 +0000 UTC m=+0.064424288 container died 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.expose-services=, release=1755695350, config_id=edpm, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 24 21:56:54 compute-0 systemd[1]: 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a-65ed2018f1b1adbf.timer: Deactivated successfully.
Nov 24 21:56:54 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a.
Nov 24 21:56:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a-userdata-shm.mount: Deactivated successfully.
Nov 24 21:56:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-31fac55f9418ebc0be43e86d97c5be3a02fe203531a846c15a018ea7b3775c13-merged.mount: Deactivated successfully.
Nov 24 21:56:55 compute-0 podman[205887]: 2025-11-24 21:56:55.318703423 +0000 UTC m=+0.700503287 container cleanup 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, distribution-scope=public, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=edpm, build-date=2025-08-20T13:12:41, version=9.6, vcs-type=git, maintainer=Red Hat, Inc., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter)
Nov 24 21:56:55 compute-0 podman[205887]: openstack_network_exporter
Nov 24 21:56:55 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Nov 24 21:56:55 compute-0 podman[205917]: openstack_network_exporter
Nov 24 21:56:55 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Failed with result 'exit-code'.
Nov 24 21:56:55 compute-0 systemd[1]: Stopped openstack_network_exporter container.
Nov 24 21:56:55 compute-0 systemd[1]: Starting openstack_network_exporter container...
Nov 24 21:56:55 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:56:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31fac55f9418ebc0be43e86d97c5be3a02fe203531a846c15a018ea7b3775c13/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 24 21:56:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31fac55f9418ebc0be43e86d97c5be3a02fe203531a846c15a018ea7b3775c13/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 21:56:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31fac55f9418ebc0be43e86d97c5be3a02fe203531a846c15a018ea7b3775c13/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Nov 24 21:56:55 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a.
Nov 24 21:56:55 compute-0 podman[205930]: 2025-11-24 21:56:55.603535493 +0000 UTC m=+0.163574019 container init 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.openshift.tags=minimal rhel9, vcs-type=git, managed_by=edpm_ansible, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal)
Nov 24 21:56:55 compute-0 openstack_network_exporter[205945]: INFO    21:56:55 main.go:48: registering *bridge.Collector
Nov 24 21:56:55 compute-0 openstack_network_exporter[205945]: INFO    21:56:55 main.go:48: registering *coverage.Collector
Nov 24 21:56:55 compute-0 openstack_network_exporter[205945]: INFO    21:56:55 main.go:48: registering *datapath.Collector
Nov 24 21:56:55 compute-0 openstack_network_exporter[205945]: INFO    21:56:55 main.go:48: registering *iface.Collector
Nov 24 21:56:55 compute-0 openstack_network_exporter[205945]: INFO    21:56:55 main.go:48: registering *memory.Collector
Nov 24 21:56:55 compute-0 openstack_network_exporter[205945]: INFO    21:56:55 main.go:48: registering *ovnnorthd.Collector
Nov 24 21:56:55 compute-0 openstack_network_exporter[205945]: INFO    21:56:55 main.go:48: registering *ovn.Collector
Nov 24 21:56:55 compute-0 openstack_network_exporter[205945]: INFO    21:56:55 main.go:48: registering *ovsdbserver.Collector
Nov 24 21:56:55 compute-0 openstack_network_exporter[205945]: INFO    21:56:55 main.go:48: registering *pmd_perf.Collector
Nov 24 21:56:55 compute-0 openstack_network_exporter[205945]: INFO    21:56:55 main.go:48: registering *pmd_rxq.Collector
Nov 24 21:56:55 compute-0 openstack_network_exporter[205945]: INFO    21:56:55 main.go:48: registering *vswitch.Collector
Nov 24 21:56:55 compute-0 openstack_network_exporter[205945]: NOTICE  21:56:55 main.go:76: listening on https://:9105/metrics
Nov 24 21:56:55 compute-0 podman[205930]: 2025-11-24 21:56:55.639223652 +0000 UTC m=+0.199262098 container start 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, managed_by=edpm_ansible, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, release=1755695350, maintainer=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=ubi9-minimal)
Nov 24 21:56:55 compute-0 podman[205930]: openstack_network_exporter
Nov 24 21:56:55 compute-0 systemd[1]: Started openstack_network_exporter container.
Nov 24 21:56:55 compute-0 sudo[205881]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:55 compute-0 podman[205955]: 2025-11-24 21:56:55.735077656 +0000 UTC m=+0.079530605 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.buildah.version=1.33.7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, distribution-scope=public, config_id=edpm, build-date=2025-08-20T13:12:41, name=ubi9-minimal, release=1755695350, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 24 21:56:56 compute-0 sudo[206127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfdinynmoeojxuqpyyurshssybvsbsez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021415.9109445-833-117138092981765/AnsiballZ_find.py'
Nov 24 21:56:56 compute-0 sudo[206127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:56 compute-0 python3.9[206129]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 24 21:56:56 compute-0 sudo[206127]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:57 compute-0 sudo[206279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovnppgfrzqmogrgnfbjpalnlvvwyzkee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021416.9774773-843-187878147602809/AnsiballZ_podman_container_info.py'
Nov 24 21:56:57 compute-0 sudo[206279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:57 compute-0 python3.9[206281]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Nov 24 21:56:57 compute-0 sudo[206279]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:58 compute-0 podman[206394]: 2025-11-24 21:56:58.5504971 +0000 UTC m=+0.075095408 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 24 21:56:58 compute-0 sudo[206469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzlkditzdfdbsvqzushwivrqhgtxsnqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021418.1001961-851-61577587274557/AnsiballZ_podman_container_exec.py'
Nov 24 21:56:58 compute-0 sudo[206469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:58 compute-0 python3.9[206471]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 24 21:56:58 compute-0 systemd[1]: Started libpod-conmon-d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94.scope.
Nov 24 21:56:58 compute-0 podman[206472]: 2025-11-24 21:56:58.946178679 +0000 UTC m=+0.076927807 container exec d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 24 21:56:58 compute-0 podman[206472]: 2025-11-24 21:56:58.984700693 +0000 UTC m=+0.115449791 container exec_died d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 24 21:56:59 compute-0 systemd[1]: libpod-conmon-d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94.scope: Deactivated successfully.
Nov 24 21:56:59 compute-0 sudo[206469]: pam_unix(sudo:session): session closed for user root
Nov 24 21:56:59 compute-0 sudo[206653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wurgpeoofzywehyddhtvuuimrohfolml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021419.2257612-859-277185947468145/AnsiballZ_podman_container_exec.py'
Nov 24 21:56:59 compute-0 sudo[206653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:56:59 compute-0 python3.9[206655]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 24 21:56:59 compute-0 systemd[1]: Started libpod-conmon-d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94.scope.
Nov 24 21:56:59 compute-0 podman[206656]: 2025-11-24 21:56:59.956975675 +0000 UTC m=+0.121709366 container exec d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 21:56:59 compute-0 podman[206656]: 2025-11-24 21:56:59.995791407 +0000 UTC m=+0.160525088 container exec_died d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 24 21:57:00 compute-0 systemd[1]: libpod-conmon-d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94.scope: Deactivated successfully.
Nov 24 21:57:00 compute-0 sudo[206653]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:00 compute-0 sudo[206838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lucffokgyngcufoccrwtoexgvfokzdvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021420.3048236-867-261897851225273/AnsiballZ_file.py'
Nov 24 21:57:00 compute-0 sudo[206838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:00 compute-0 python3.9[206840]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:57:00 compute-0 sudo[206838]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:01 compute-0 sudo[206990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrhoxdxbjojvqdauftfbuuinscsucfwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021421.1398222-876-105580889849219/AnsiballZ_podman_container_info.py'
Nov 24 21:57:01 compute-0 sudo[206990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:01 compute-0 podman[206993]: 2025-11-24 21:57:01.964949068 +0000 UTC m=+0.387642478 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 24 21:57:02 compute-0 podman[206992]: 2025-11-24 21:57:02.015829527 +0000 UTC m=+0.438526067 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 24 21:57:02 compute-0 python3.9[206994]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Nov 24 21:57:02 compute-0 sudo[206990]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:02 compute-0 sudo[207199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auzugeyknouvwlharbkgrhynxpevcvhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021422.4182284-884-164094196579039/AnsiballZ_podman_container_exec.py'
Nov 24 21:57:02 compute-0 sudo[207199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:02 compute-0 python3.9[207201]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 24 21:57:03 compute-0 systemd[1]: Started libpod-conmon-fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6.scope.
Nov 24 21:57:03 compute-0 podman[207202]: 2025-11-24 21:57:03.081889049 +0000 UTC m=+0.086020467 container exec fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 24 21:57:03 compute-0 podman[207202]: 2025-11-24 21:57:03.112198557 +0000 UTC m=+0.116330005 container exec_died fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 21:57:03 compute-0 systemd[1]: libpod-conmon-fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6.scope: Deactivated successfully.
Nov 24 21:57:03 compute-0 nova_compute[189608]: 2025-11-24 21:57:03.139 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 21:57:03 compute-0 nova_compute[189608]: 2025-11-24 21:57:03.141 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 21:57:03 compute-0 nova_compute[189608]: 2025-11-24 21:57:03.159 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 21:57:03 compute-0 nova_compute[189608]: 2025-11-24 21:57:03.160 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 21:57:03 compute-0 nova_compute[189608]: 2025-11-24 21:57:03.160 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 21:57:03 compute-0 sudo[207199]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:03 compute-0 sudo[207384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ituicofdwjftxesgdrcsunofjksnqwhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021423.3700705-892-36485818630689/AnsiballZ_podman_container_exec.py'
Nov 24 21:57:03 compute-0 sudo[207384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:03 compute-0 nova_compute[189608]: 2025-11-24 21:57:03.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 21:57:03 compute-0 nova_compute[189608]: 2025-11-24 21:57:03.793 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 21:57:03 compute-0 nova_compute[189608]: 2025-11-24 21:57:03.794 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 21:57:03 compute-0 nova_compute[189608]: 2025-11-24 21:57:03.810 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 21:57:03 compute-0 nova_compute[189608]: 2025-11-24 21:57:03.811 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 21:57:04 compute-0 python3.9[207386]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 24 21:57:04 compute-0 systemd[1]: Started libpod-conmon-fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6.scope.
Nov 24 21:57:04 compute-0 podman[207387]: 2025-11-24 21:57:04.120486786 +0000 UTC m=+0.092504116 container exec fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 21:57:04 compute-0 podman[207387]: 2025-11-24 21:57:04.155715164 +0000 UTC m=+0.127732494 container exec_died fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 21:57:04 compute-0 systemd[1]: libpod-conmon-fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6.scope: Deactivated successfully.
Nov 24 21:57:04 compute-0 sudo[207384]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:04 compute-0 sudo[207569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkmjkpoadokcwifddbvmsoivktmeiugm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021424.393873-900-275369840580197/AnsiballZ_file.py'
Nov 24 21:57:04 compute-0 sudo[207569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:04 compute-0 nova_compute[189608]: 2025-11-24 21:57:04.792 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 21:57:04 compute-0 nova_compute[189608]: 2025-11-24 21:57:04.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 21:57:04 compute-0 nova_compute[189608]: 2025-11-24 21:57:04.793 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 21:57:04 compute-0 nova_compute[189608]: 2025-11-24 21:57:04.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 21:57:04 compute-0 nova_compute[189608]: 2025-11-24 21:57:04.821 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 21:57:04 compute-0 nova_compute[189608]: 2025-11-24 21:57:04.822 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 21:57:04 compute-0 nova_compute[189608]: 2025-11-24 21:57:04.822 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 21:57:04 compute-0 nova_compute[189608]: 2025-11-24 21:57:04.822 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 21:57:04 compute-0 python3.9[207571]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:57:04 compute-0 sudo[207569]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:05 compute-0 nova_compute[189608]: 2025-11-24 21:57:05.017 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 21:57:05 compute-0 nova_compute[189608]: 2025-11-24 21:57:05.018 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5895MB free_disk=72.26178359985352GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 21:57:05 compute-0 nova_compute[189608]: 2025-11-24 21:57:05.019 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 21:57:05 compute-0 nova_compute[189608]: 2025-11-24 21:57:05.019 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 21:57:05 compute-0 nova_compute[189608]: 2025-11-24 21:57:05.161 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 21:57:05 compute-0 nova_compute[189608]: 2025-11-24 21:57:05.162 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 21:57:05 compute-0 nova_compute[189608]: 2025-11-24 21:57:05.195 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 21:57:05 compute-0 nova_compute[189608]: 2025-11-24 21:57:05.213 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 21:57:05 compute-0 nova_compute[189608]: 2025-11-24 21:57:05.215 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 21:57:05 compute-0 nova_compute[189608]: 2025-11-24 21:57:05.216 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.197s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 21:57:05 compute-0 sudo[207721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puxfbqiwygopjeqqisbushkbbrtevvfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021425.2840447-909-42947604091693/AnsiballZ_podman_container_info.py'
Nov 24 21:57:05 compute-0 sudo[207721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:05 compute-0 python3.9[207723]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman
Nov 24 21:57:06 compute-0 sudo[207721]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:06 compute-0 sudo[207886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgrswuvsoxzrnnxwfzkixxorlgmmrrai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021426.289031-917-270096767603962/AnsiballZ_podman_container_exec.py'
Nov 24 21:57:06 compute-0 sudo[207886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:06 compute-0 python3.9[207888]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 24 21:57:06 compute-0 systemd[1]: Started libpod-conmon-5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe.scope.
Nov 24 21:57:06 compute-0 podman[207889]: 2025-11-24 21:57:06.995156691 +0000 UTC m=+0.101679849 container exec 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:57:07 compute-0 podman[207889]: 2025-11-24 21:57:07.031946869 +0000 UTC m=+0.138470047 container exec_died 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 21:57:07 compute-0 systemd[1]: libpod-conmon-5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe.scope: Deactivated successfully.
Nov 24 21:57:07 compute-0 sudo[207886]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:07 compute-0 sudo[208071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgwdkznsdjlxwybrhtaxaljkhcerywan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021427.3623328-925-126253615067509/AnsiballZ_podman_container_exec.py'
Nov 24 21:57:07 compute-0 sudo[208071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:07 compute-0 python3.9[208073]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 24 21:57:08 compute-0 systemd[1]: Started libpod-conmon-5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe.scope.
Nov 24 21:57:08 compute-0 podman[208074]: 2025-11-24 21:57:08.11509709 +0000 UTC m=+0.096790970 container exec 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 21:57:08 compute-0 podman[208074]: 2025-11-24 21:57:08.148933771 +0000 UTC m=+0.130627611 container exec_died 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Nov 24 21:57:08 compute-0 systemd[1]: libpod-conmon-5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe.scope: Deactivated successfully.
Nov 24 21:57:08 compute-0 sudo[208071]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:08 compute-0 sudo[208253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivcgewygvdbvycceiiwbgnfyaudjbxbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021428.455439-933-92376835507430/AnsiballZ_file.py'
Nov 24 21:57:08 compute-0 sudo[208253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:09 compute-0 python3.9[208255]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:57:09 compute-0 sudo[208253]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:09 compute-0 sudo[208405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbsxwfxrsqdhpbdjtfppozutngfjbyjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021429.312984-942-197282011717951/AnsiballZ_podman_container_info.py'
Nov 24 21:57:09 compute-0 sudo[208405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:09 compute-0 python3.9[208407]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Nov 24 21:57:10 compute-0 sudo[208405]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:10 compute-0 sudo[208570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyscedhtwqylmhkzoqfxywddpdvbeblr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021430.2757492-950-45994457907426/AnsiballZ_podman_container_exec.py'
Nov 24 21:57:10 compute-0 sudo[208570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:10 compute-0 python3.9[208572]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 24 21:57:10 compute-0 systemd[1]: Started libpod-conmon-a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d.scope.
Nov 24 21:57:10 compute-0 podman[208573]: 2025-11-24 21:57:10.990205466 +0000 UTC m=+0.089936699 container exec a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844)
Nov 24 21:57:10 compute-0 podman[208573]: 2025-11-24 21:57:10.996119852 +0000 UTC m=+0.095851105 container exec_died a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844)
Nov 24 21:57:11 compute-0 sudo[208570]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:11 compute-0 systemd[1]: libpod-conmon-a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d.scope: Deactivated successfully.
Nov 24 21:57:11 compute-0 podman[208702]: 2025-11-24 21:57:11.532420733 +0000 UTC m=+0.081395684 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 24 21:57:11 compute-0 sudo[208775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muivzziapezqsahwafmhkxgwovnchkss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021431.2612503-958-149684549774515/AnsiballZ_podman_container_exec.py'
Nov 24 21:57:11 compute-0 sudo[208775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:11 compute-0 python3.9[208777]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 24 21:57:11 compute-0 systemd[1]: Started libpod-conmon-a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d.scope.
Nov 24 21:57:11 compute-0 podman[208778]: 2025-11-24 21:57:11.981500798 +0000 UTC m=+0.092663861 container exec a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute)
Nov 24 21:57:11 compute-0 podman[208778]: 2025-11-24 21:57:11.991798889 +0000 UTC m=+0.102961922 container exec_died a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Nov 24 21:57:12 compute-0 sudo[208775]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:12 compute-0 systemd[1]: libpod-conmon-a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d.scope: Deactivated successfully.
Nov 24 21:57:12 compute-0 sudo[208957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjntdqupoidqxxdsaislppnkolihiein ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021432.2748735-966-112643481706669/AnsiballZ_file.py'
Nov 24 21:57:12 compute-0 sudo[208957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:12 compute-0 python3.9[208959]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:57:12 compute-0 sudo[208957]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:13 compute-0 sudo[209109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwmrmpebhavstlzqrodpwyblymjncrzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021433.061835-975-54563118690749/AnsiballZ_podman_container_info.py'
Nov 24 21:57:13 compute-0 sudo[209109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:13 compute-0 python3.9[209111]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Nov 24 21:57:13 compute-0 sudo[209109]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:14 compute-0 sudo[209275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djhvswsluehwoaowixaxadsbtjnsyidu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021433.9588938-983-456137508228/AnsiballZ_podman_container_exec.py'
Nov 24 21:57:14 compute-0 sudo[209275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:14 compute-0 python3.9[209277]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 24 21:57:14 compute-0 systemd[1]: Started libpod-conmon-c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0.scope.
Nov 24 21:57:14 compute-0 podman[209278]: 2025-11-24 21:57:14.796970583 +0000 UTC m=+0.110047108 container exec c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 21:57:14 compute-0 podman[209278]: 2025-11-24 21:57:14.839790721 +0000 UTC m=+0.152867206 container exec_died c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 21:57:14 compute-0 systemd[1]: libpod-conmon-c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0.scope: Deactivated successfully.
Nov 24 21:57:14 compute-0 sudo[209275]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:15 compute-0 podman[209433]: 2025-11-24 21:57:15.530593701 +0000 UTC m=+0.084990989 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 24 21:57:15 compute-0 sudo[209478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdjzzetipgqkrlrdygdsupenfhylgvul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021435.1773396-991-254565097492163/AnsiballZ_podman_container_exec.py'
Nov 24 21:57:15 compute-0 sudo[209478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:15 compute-0 python3.9[209481]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 24 21:57:15 compute-0 systemd[1]: Started libpod-conmon-c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0.scope.
Nov 24 21:57:15 compute-0 podman[209482]: 2025-11-24 21:57:15.930495011 +0000 UTC m=+0.137991835 container exec c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 21:57:15 compute-0 podman[209482]: 2025-11-24 21:57:15.964476155 +0000 UTC m=+0.171972899 container exec_died c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 24 21:57:16 compute-0 systemd[1]: libpod-conmon-c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0.scope: Deactivated successfully.
Nov 24 21:57:16 compute-0 sudo[209478]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:16 compute-0 sudo[209664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwdincazknxgjeddjzhtxegwzgcwpwzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021436.1935935-999-244575018478314/AnsiballZ_file.py'
Nov 24 21:57:16 compute-0 sudo[209664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:16 compute-0 python3.9[209666]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:57:16 compute-0 sudo[209664]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:17 compute-0 sudo[209816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzvrdlayzgsarbtfhiolnwuaclagthng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021437.0547376-1008-121993145509831/AnsiballZ_podman_container_info.py'
Nov 24 21:57:17 compute-0 sudo[209816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:17 compute-0 python3.9[209818]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Nov 24 21:57:17 compute-0 sudo[209816]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:18 compute-0 sudo[209998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjwcqsdniumqpqtuoqynxohjrwqduqkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021438.0487783-1016-168428098444357/AnsiballZ_podman_container_exec.py'
Nov 24 21:57:18 compute-0 sudo[209998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:18 compute-0 podman[209955]: 2025-11-24 21:57:18.485385745 +0000 UTC m=+0.111351234 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=edpm, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, container_name=ceilometer_agent_compute)
Nov 24 21:57:18 compute-0 python3.9[210003]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 24 21:57:18 compute-0 systemd[1]: Started libpod-conmon-9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27.scope.
Nov 24 21:57:18 compute-0 podman[210004]: 2025-11-24 21:57:18.837767673 +0000 UTC m=+0.102765897 container exec 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 24 21:57:18 compute-0 podman[210004]: 2025-11-24 21:57:18.872782965 +0000 UTC m=+0.137781189 container exec_died 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 24 21:57:18 compute-0 systemd[1]: libpod-conmon-9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27.scope: Deactivated successfully.
Nov 24 21:57:18 compute-0 sudo[209998]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:19 compute-0 sudo[210184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gujaqtwuzxgwcqlrbyczlalaunouwwux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021439.1175063-1024-24821913627359/AnsiballZ_podman_container_exec.py'
Nov 24 21:57:19 compute-0 sudo[210184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:19 compute-0 python3.9[210186]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 24 21:57:19 compute-0 systemd[1]: Started libpod-conmon-9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27.scope.
Nov 24 21:57:19 compute-0 podman[210187]: 2025-11-24 21:57:19.880710875 +0000 UTC m=+0.107031970 container exec 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 24 21:57:19 compute-0 podman[210187]: 2025-11-24 21:57:19.910815897 +0000 UTC m=+0.137137002 container exec_died 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 24 21:57:19 compute-0 systemd[1]: libpod-conmon-9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27.scope: Deactivated successfully.
Nov 24 21:57:19 compute-0 sudo[210184]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:20 compute-0 sudo[210368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqjllvsaidphscsxxglpvzspdiacbszp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021440.2158413-1032-229747148058566/AnsiballZ_file.py'
Nov 24 21:57:20 compute-0 sudo[210368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:20 compute-0 python3.9[210370]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:57:20 compute-0 sudo[210368]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:21 compute-0 sudo[210520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khdxtvxqqjiiaqrnwcnqxtadgemkimwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021440.9891243-1041-165552606307260/AnsiballZ_podman_container_info.py'
Nov 24 21:57:21 compute-0 sudo[210520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:21 compute-0 python3.9[210522]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Nov 24 21:57:21 compute-0 sudo[210520]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:22 compute-0 sudo[210686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krnwlxwahkycifofbeksdyphwyxexkvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021441.806808-1049-126431532740875/AnsiballZ_podman_container_exec.py'
Nov 24 21:57:22 compute-0 sudo[210686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:22 compute-0 python3.9[210688]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 24 21:57:22 compute-0 systemd[1]: Started libpod-conmon-366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a.scope.
Nov 24 21:57:22 compute-0 podman[210689]: 2025-11-24 21:57:22.524686345 +0000 UTC m=+0.103665871 container exec 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, version=9.6, architecture=x86_64, name=ubi9-minimal, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41)
Nov 24 21:57:22 compute-0 podman[210689]: 2025-11-24 21:57:22.558844223 +0000 UTC m=+0.137823689 container exec_died 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, architecture=x86_64, config_id=edpm, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Red Hat, Inc.)
Nov 24 21:57:22 compute-0 systemd[1]: libpod-conmon-366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a.scope: Deactivated successfully.
Nov 24 21:57:22 compute-0 sudo[210686]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:23 compute-0 sudo[210871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdnoutpiojvioebudcotgarkghrdmrcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021442.8372235-1057-64871914566027/AnsiballZ_podman_container_exec.py'
Nov 24 21:57:23 compute-0 sudo[210871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:23 compute-0 python3.9[210873]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 24 21:57:23 compute-0 systemd[1]: Started libpod-conmon-366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a.scope.
Nov 24 21:57:23 compute-0 podman[210874]: 2025-11-24 21:57:23.534547245 +0000 UTC m=+0.106276379 container exec 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, distribution-scope=public, build-date=2025-08-20T13:12:41, release=1755695350, name=ubi9-minimal, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-type=git, maintainer=Red Hat, Inc.)
Nov 24 21:57:23 compute-0 podman[210874]: 2025-11-24 21:57:23.567899473 +0000 UTC m=+0.139628537 container exec_died 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, io.buildah.version=1.33.7, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, name=ubi9-minimal, vendor=Red Hat, Inc., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, config_id=edpm, distribution-scope=public, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 24 21:57:23 compute-0 systemd[1]: libpod-conmon-366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a.scope: Deactivated successfully.
Nov 24 21:57:23 compute-0 sudo[210871]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:24 compute-0 sudo[211056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uatuezhuolgtmoitykqdvbdbzeeyyysj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021444.0452619-1065-51246602149538/AnsiballZ_file.py'
Nov 24 21:57:24 compute-0 sudo[211056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:24 compute-0 python3.9[211058]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:57:24 compute-0 sudo[211056]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:25 compute-0 sudo[211208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuiyukyxvmxjihfcmqahygazsnpeutlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021445.0519326-1074-14628908187355/AnsiballZ_file.py'
Nov 24 21:57:25 compute-0 sudo[211208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:25 compute-0 python3.9[211210]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:57:25 compute-0 sudo[211208]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:26 compute-0 sudo[211372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bteqcpdzfbhfuplfczfrqwbiigavboxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021445.8177521-1082-158356909659292/AnsiballZ_stat.py'
Nov 24 21:57:26 compute-0 sudo[211372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:26 compute-0 podman[211334]: 2025-11-24 21:57:26.274619155 +0000 UTC m=+0.130382314 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, vcs-type=git, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=Red Hat, Inc., managed_by=edpm_ansible, version=9.6, config_id=edpm)
Nov 24 21:57:26 compute-0 python3.9[211382]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:57:26 compute-0 sudo[211372]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:26 compute-0 sudo[211503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xolaesbqlfiqcobldqovkwsejkqyfwcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021445.8177521-1082-158356909659292/AnsiballZ_copy.py'
Nov 24 21:57:26 compute-0 sudo[211503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:27 compute-0 python3.9[211505]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764021445.8177521-1082-158356909659292/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:57:27 compute-0 sudo[211503]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:27 compute-0 sudo[211655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byiyqibqgasbidkzzlcigunoawvujjum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021447.5245223-1098-233735855290678/AnsiballZ_file.py'
Nov 24 21:57:27 compute-0 sudo[211655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:28 compute-0 python3.9[211657]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:57:28 compute-0 sudo[211655]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:28 compute-0 sudo[211824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdgstdbhgpmrewxpdyvnsceqhxynbnoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021448.3934135-1106-258957209775596/AnsiballZ_stat.py'
Nov 24 21:57:28 compute-0 sudo[211824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:28 compute-0 podman[211781]: 2025-11-24 21:57:28.821807406 +0000 UTC m=+0.074276686 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 24 21:57:29 compute-0 python3.9[211833]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:57:29 compute-0 sudo[211824]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:29 compute-0 sudo[211909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkspnygdjyypvcaagygyhrdpagiaamdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021448.3934135-1106-258957209775596/AnsiballZ_file.py'
Nov 24 21:57:29 compute-0 sudo[211909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:29 compute-0 python3.9[211911]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:57:29 compute-0 sudo[211909]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:30 compute-0 sudo[212061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgrtnpbuhcmonvlrpcprmdlnhyeabsgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021449.7133608-1118-136563502523920/AnsiballZ_stat.py'
Nov 24 21:57:30 compute-0 sudo[212061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:30 compute-0 python3.9[212063]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:57:30 compute-0 sudo[212061]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:30 compute-0 sudo[212139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvnxahctcwgqhtxspqqctamfpbjatpxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021449.7133608-1118-136563502523920/AnsiballZ_file.py'
Nov 24 21:57:30 compute-0 sudo[212139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:30 compute-0 python3.9[212141]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.l7ex695p recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:57:30 compute-0 sudo[212139]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:31 compute-0 sudo[212291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwgzvsfwunooayjfglwldhlinmueryvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021451.0900047-1130-78553258995918/AnsiballZ_stat.py'
Nov 24 21:57:31 compute-0 sudo[212291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:31 compute-0 python3.9[212293]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:57:31 compute-0 sudo[212291]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:32 compute-0 sudo[212369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afnjrykgzvprmrycxtmohvszcbgxebdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021451.0900047-1130-78553258995918/AnsiballZ_file.py'
Nov 24 21:57:32 compute-0 sudo[212369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:32 compute-0 podman[212372]: 2025-11-24 21:57:32.141254942 +0000 UTC m=+0.055213175 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 21:57:32 compute-0 podman[212371]: 2025-11-24 21:57:32.200615065 +0000 UTC m=+0.120577665 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 21:57:32 compute-0 python3.9[212378]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:57:32 compute-0 sudo[212369]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:32 compute-0 sudo[212562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndbfhblhzzmraiivdhmdhcdkbodcixel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021452.5617893-1143-77514336939800/AnsiballZ_command.py'
Nov 24 21:57:32 compute-0 sudo[212562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:33 compute-0 python3.9[212564]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:57:33 compute-0 sudo[212562]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:34 compute-0 sudo[212715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tztzlsizrprpoovsoqevrbmeizmqhrus ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764021453.414516-1151-55692101444212/AnsiballZ_edpm_nftables_from_files.py'
Nov 24 21:57:34 compute-0 sudo[212715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:34 compute-0 python3[212717]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 24 21:57:34 compute-0 sudo[212715]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:34 compute-0 sudo[212867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shjlafkhhwstjqantykvftetgmnkdyzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021454.474715-1159-156539022655905/AnsiballZ_stat.py'
Nov 24 21:57:34 compute-0 sudo[212867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:35 compute-0 python3.9[212869]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:57:35 compute-0 sudo[212867]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:35 compute-0 sudo[212945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tscabdpkfptufejkmfeicsbogypcacye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021454.474715-1159-156539022655905/AnsiballZ_file.py'
Nov 24 21:57:35 compute-0 sudo[212945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:35 compute-0 python3.9[212947]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:57:35 compute-0 sudo[212945]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:36 compute-0 sudo[213097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smprbrxwspbofdggnssypibcjvdocwgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021455.7631679-1171-207279600955786/AnsiballZ_stat.py'
Nov 24 21:57:36 compute-0 sudo[213097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:36 compute-0 python3.9[213099]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:57:36 compute-0 sudo[213097]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:36 compute-0 sudo[213175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jygvbsuyqqwojwurchliefayzhyffewi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021455.7631679-1171-207279600955786/AnsiballZ_file.py'
Nov 24 21:57:36 compute-0 sudo[213175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:36 compute-0 python3.9[213177]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:57:36 compute-0 sudo[213175]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:37 compute-0 sudo[213327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-relhfueuyhcevdrctlqzlnmrtiwiqatd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021457.1247633-1183-253260396138073/AnsiballZ_stat.py'
Nov 24 21:57:37 compute-0 sudo[213327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:37 compute-0 python3.9[213329]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:57:37 compute-0 sudo[213327]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:38 compute-0 sudo[213405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mywvcqlsroatstxiukqjjxibykqoisxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021457.1247633-1183-253260396138073/AnsiballZ_file.py'
Nov 24 21:57:38 compute-0 sudo[213405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:38 compute-0 python3.9[213407]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:57:38 compute-0 sudo[213405]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:38 compute-0 sudo[213557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udjogqkpajtdnvyaiudxxigbfnrqtrjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021458.4329772-1195-174540521648461/AnsiballZ_stat.py'
Nov 24 21:57:38 compute-0 sudo[213557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:39 compute-0 python3.9[213559]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:57:39 compute-0 sudo[213557]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:39 compute-0 sudo[213635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpimuwuglwdemiylvnhpjkdrjltaqfzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021458.4329772-1195-174540521648461/AnsiballZ_file.py'
Nov 24 21:57:39 compute-0 sudo[213635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:39 compute-0 python3.9[213637]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:57:39 compute-0 sudo[213635]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:40 compute-0 sudo[213787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mflpmjcssjtzakphnbrpofxpvcfussrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021459.7735422-1207-256038502890225/AnsiballZ_stat.py'
Nov 24 21:57:40 compute-0 sudo[213787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:40 compute-0 python3.9[213789]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:57:40 compute-0 sudo[213787]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:40 compute-0 sudo[213912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbeceqonnsnrvhwikitwvrwgtqjeoptx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021459.7735422-1207-256038502890225/AnsiballZ_copy.py'
Nov 24 21:57:40 compute-0 sudo[213912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:41 compute-0 python3.9[213914]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764021459.7735422-1207-256038502890225/.source.nft follow=False _original_basename=ruleset.j2 checksum=fb3275eced3a2e06312143189928124e1b2df34a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:57:41 compute-0 sudo[213912]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:41 compute-0 sudo[214077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmcpepzzhicisiczfkcpmhyteggqezmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021461.3758457-1222-129668490964483/AnsiballZ_file.py'
Nov 24 21:57:41 compute-0 sudo[214077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:41 compute-0 podman[214038]: 2025-11-24 21:57:41.802884087 +0000 UTC m=+0.092282492 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 24 21:57:42 compute-0 python3.9[214090]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:57:42 compute-0 sudo[214077]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:42 compute-0 sudo[214240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-foktbmplnmzxpjvmtablsvmkciwfhble ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021462.256078-1230-164887071062330/AnsiballZ_command.py'
Nov 24 21:57:42 compute-0 sudo[214240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:42 compute-0 python3.9[214242]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:57:42 compute-0 sudo[214240]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:43 compute-0 sudo[214395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsvjpekfglibvjuwfflvbwwudlgwwwno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021463.2178807-1238-116739498581236/AnsiballZ_blockinfile.py'
Nov 24 21:57:43 compute-0 sudo[214395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:43 compute-0 python3.9[214397]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:57:44 compute-0 sudo[214395]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:44 compute-0 sudo[214547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmeqzcbxfodllcnfhgdyvclwfrdbusch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021464.2771504-1247-144148283093909/AnsiballZ_command.py'
Nov 24 21:57:44 compute-0 sudo[214547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:44 compute-0 python3.9[214549]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:57:44 compute-0 sudo[214547]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:45 compute-0 sudo[214700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfyciduuinfgmrbeigwkmbiwgjvlrwig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021465.0971112-1255-182495749769518/AnsiballZ_stat.py'
Nov 24 21:57:45 compute-0 sudo[214700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:45 compute-0 python3.9[214702]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:57:45 compute-0 sudo[214700]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:46 compute-0 sudo[214870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uinhythhbfwhxwsnwlpkoflcnlqqmubr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021465.9070587-1263-272731080005544/AnsiballZ_command.py'
Nov 24 21:57:46 compute-0 sudo[214870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:46 compute-0 podman[214828]: 2025-11-24 21:57:46.339461471 +0000 UTC m=+0.080693026 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 21:57:46 compute-0 python3.9[214876]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:57:46 compute-0 sudo[214870]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:47 compute-0 sudo[215030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhsyjzyvtcswrffyccmfaokmxxcyhmrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021466.778423-1271-28945720960334/AnsiballZ_file.py'
Nov 24 21:57:47 compute-0 sudo[215030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:47 compute-0 python3.9[215032]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:57:47 compute-0 sudo[215030]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:47 compute-0 sshd-session[189914]: Connection closed by 192.168.122.30 port 39532
Nov 24 21:57:47 compute-0 sshd-session[189911]: pam_unix(sshd:session): session closed for user zuul
Nov 24 21:57:47 compute-0 systemd[1]: session-26.scope: Deactivated successfully.
Nov 24 21:57:47 compute-0 systemd[1]: session-26.scope: Consumed 1min 59.719s CPU time.
Nov 24 21:57:47 compute-0 systemd-logind[806]: Session 26 logged out. Waiting for processes to exit.
Nov 24 21:57:47 compute-0 systemd-logind[806]: Removed session 26.
Nov 24 21:57:49 compute-0 podman[215057]: 2025-11-24 21:57:49.512671146 +0000 UTC m=+0.075812307 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 24 21:57:53 compute-0 sshd-session[215079]: Accepted publickey for zuul from 192.168.122.30 port 36452 ssh2: ECDSA SHA256:EZFVJPqmz26FBCQurIACD0v24tZqtt5C19oQoS60ZHY
Nov 24 21:57:53 compute-0 systemd-logind[806]: New session 27 of user zuul.
Nov 24 21:57:53 compute-0 systemd[1]: Started Session 27 of User zuul.
Nov 24 21:57:53 compute-0 sshd-session[215079]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 21:57:54 compute-0 sudo[215233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzxwsgsgtezgztxsfqelhuujuziczlvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021473.7067597-24-163982290280352/AnsiballZ_systemd_service.py'
Nov 24 21:57:54 compute-0 sudo[215233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:57:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:57:54.547 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 21:57:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:57:54.548 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 21:57:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:57:54.548 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 21:57:54 compute-0 python3.9[215235]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 21:57:54 compute-0 systemd[1]: Reloading.
Nov 24 21:57:54 compute-0 systemd-sysv-generator[215266]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:57:54 compute-0 systemd-rc-local-generator[215263]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:57:55 compute-0 sudo[215233]: pam_unix(sudo:session): session closed for user root
Nov 24 21:57:56 compute-0 python3.9[215420]: ansible-ansible.builtin.service_facts Invoked
Nov 24 21:57:56 compute-0 network[215437]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 21:57:56 compute-0 network[215438]: 'network-scripts' will be removed from distribution in near future.
Nov 24 21:57:56 compute-0 network[215439]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 21:57:56 compute-0 podman[215444]: 2025-11-24 21:57:56.454786895 +0000 UTC m=+0.106788097 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, architecture=x86_64, config_id=edpm, distribution-scope=public, maintainer=Red Hat, Inc., vcs-type=git, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, io.buildah.version=1.33.7, vendor=Red Hat, Inc., version=9.6)
Nov 24 21:57:59 compute-0 podman[215494]: 2025-11-24 21:57:59.374903128 +0000 UTC m=+0.080486240 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 24 21:57:59 compute-0 podman[203795]: time="2025-11-24T21:57:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 21:57:59 compute-0 podman[203795]: @ - - [24/Nov/2025:21:57:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 22543 "" "Go-http-client/1.1"
Nov 24 21:57:59 compute-0 podman[203795]: @ - - [24/Nov/2025:21:57:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3414 "" "Go-http-client/1.1"
Nov 24 21:58:01 compute-0 openstack_network_exporter[205945]: ERROR   21:58:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 21:58:01 compute-0 openstack_network_exporter[205945]: ERROR   21:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 21:58:01 compute-0 openstack_network_exporter[205945]: ERROR   21:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 21:58:01 compute-0 openstack_network_exporter[205945]: ERROR   21:58:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 21:58:01 compute-0 openstack_network_exporter[205945]: 
Nov 24 21:58:01 compute-0 openstack_network_exporter[205945]: ERROR   21:58:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 21:58:01 compute-0 openstack_network_exporter[205945]: 
Nov 24 21:58:02 compute-0 sudo[215792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kigcloydhuqwstmpdstgqcmhwupxxzxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021481.9998746-47-231635323754820/AnsiballZ_systemd_service.py'
Nov 24 21:58:02 compute-0 sudo[215792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:58:02 compute-0 podman[215738]: 2025-11-24 21:58:02.47757466 +0000 UTC m=+0.102266048 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 24 21:58:02 compute-0 podman[215737]: 2025-11-24 21:58:02.510794581 +0000 UTC m=+0.139170423 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 21:58:02 compute-0 python3.9[215804]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 21:58:02 compute-0 sudo[215792]: pam_unix(sudo:session): session closed for user root
Nov 24 21:58:03 compute-0 sudo[215960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akisbejkpmjntqyykyympvdcbzgfviwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021483.1406586-57-107237981463792/AnsiballZ_file.py'
Nov 24 21:58:03 compute-0 sudo[215960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:58:03 compute-0 python3.9[215962]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:58:03 compute-0 sudo[215960]: pam_unix(sudo:session): session closed for user root
Nov 24 21:58:04 compute-0 nova_compute[189608]: 2025-11-24 21:58:04.212 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 21:58:04 compute-0 nova_compute[189608]: 2025-11-24 21:58:04.213 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 21:58:04 compute-0 nova_compute[189608]: 2025-11-24 21:58:04.214 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 21:58:04 compute-0 sudo[216112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljszqanktfadzeigddfvbpqrnpysmxil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021484.1294603-65-187525654264575/AnsiballZ_file.py'
Nov 24 21:58:04 compute-0 sudo[216112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:58:04 compute-0 python3.9[216114]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:58:04 compute-0 sudo[216112]: pam_unix(sudo:session): session closed for user root
Nov 24 21:58:04 compute-0 nova_compute[189608]: 2025-11-24 21:58:04.792 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 21:58:04 compute-0 nova_compute[189608]: 2025-11-24 21:58:04.793 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 21:58:04 compute-0 nova_compute[189608]: 2025-11-24 21:58:04.793 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 21:58:04 compute-0 nova_compute[189608]: 2025-11-24 21:58:04.805 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 21:58:04 compute-0 nova_compute[189608]: 2025-11-24 21:58:04.805 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 21:58:04 compute-0 nova_compute[189608]: 2025-11-24 21:58:04.805 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 21:58:05 compute-0 sudo[216264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgihbxdznscxnqvwwabjhbcsjykgkpwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021485.0192087-74-234621243688406/AnsiballZ_command.py'
Nov 24 21:58:05 compute-0 sudo[216264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:58:05 compute-0 python3.9[216266]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:58:05 compute-0 sudo[216264]: pam_unix(sudo:session): session closed for user root
Nov 24 21:58:06 compute-0 python3.9[216418]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 24 21:58:06 compute-0 nova_compute[189608]: 2025-11-24 21:58:06.792 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 21:58:06 compute-0 nova_compute[189608]: 2025-11-24 21:58:06.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 21:58:06 compute-0 nova_compute[189608]: 2025-11-24 21:58:06.793 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 21:58:06 compute-0 nova_compute[189608]: 2025-11-24 21:58:06.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 21:58:06 compute-0 nova_compute[189608]: 2025-11-24 21:58:06.824 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 21:58:06 compute-0 nova_compute[189608]: 2025-11-24 21:58:06.825 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 21:58:06 compute-0 nova_compute[189608]: 2025-11-24 21:58:06.825 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 21:58:06 compute-0 nova_compute[189608]: 2025-11-24 21:58:06.825 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 21:58:06 compute-0 nova_compute[189608]: 2025-11-24 21:58:06.971 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 21:58:06 compute-0 nova_compute[189608]: 2025-11-24 21:58:06.972 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5879MB free_disk=72.2616081237793GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 21:58:06 compute-0 nova_compute[189608]: 2025-11-24 21:58:06.973 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 21:58:06 compute-0 nova_compute[189608]: 2025-11-24 21:58:06.973 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 21:58:07 compute-0 nova_compute[189608]: 2025-11-24 21:58:07.026 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 21:58:07 compute-0 nova_compute[189608]: 2025-11-24 21:58:07.027 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 21:58:07 compute-0 nova_compute[189608]: 2025-11-24 21:58:07.044 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 21:58:07 compute-0 nova_compute[189608]: 2025-11-24 21:58:07.055 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 21:58:07 compute-0 nova_compute[189608]: 2025-11-24 21:58:07.057 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 21:58:07 compute-0 nova_compute[189608]: 2025-11-24 21:58:07.057 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 21:58:07 compute-0 sudo[216568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyrzunrtkxkirtvmkivbqmlhlrlmmyul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021487.0036497-92-244844408291314/AnsiballZ_systemd_service.py'
Nov 24 21:58:07 compute-0 sudo[216568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:58:07 compute-0 python3.9[216570]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 21:58:07 compute-0 systemd[1]: Reloading.
Nov 24 21:58:07 compute-0 systemd-rc-local-generator[216597]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:58:07 compute-0 systemd-sysv-generator[216600]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:58:07 compute-0 sudo[216568]: pam_unix(sudo:session): session closed for user root
Nov 24 21:58:08 compute-0 sudo[216754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlznwpxajuwclfsonemqruqfwxawchdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021488.2074213-100-159722335907016/AnsiballZ_command.py'
Nov 24 21:58:08 compute-0 sudo[216754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:58:08 compute-0 python3.9[216756]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:58:08 compute-0 sudo[216754]: pam_unix(sudo:session): session closed for user root
Nov 24 21:58:09 compute-0 sudo[216907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkavlzjzajotsxezvmhasejcsetgvgpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021489.095539-109-218694511475454/AnsiballZ_file.py'
Nov 24 21:58:09 compute-0 sudo[216907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:58:09 compute-0 python3.9[216909]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry-power-monitoring recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:58:09 compute-0 sudo[216907]: pam_unix(sudo:session): session closed for user root
Nov 24 21:58:10 compute-0 python3.9[217059]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:58:11 compute-0 python3.9[217211]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:58:12 compute-0 podman[217306]: 2025-11-24 21:58:12.330415163 +0000 UTC m=+0.098821300 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 24 21:58:12 compute-0 python3.9[217338]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764021491.0262492-125-280477987844218/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:58:13 compute-0 sudo[217507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyraspnjwpmsqmhwowpnifovnajmgnxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021492.929157-143-162309114782910/AnsiballZ_getent.py'
Nov 24 21:58:13 compute-0 sudo[217507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:58:13 compute-0 python3.9[217509]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Nov 24 21:58:13 compute-0 sudo[217507]: pam_unix(sudo:session): session closed for user root
Nov 24 21:58:14 compute-0 sshd-session[217535]: Invalid user solana from 45.148.10.240 port 35068
Nov 24 21:58:14 compute-0 sshd-session[217535]: Connection closed by invalid user solana 45.148.10.240 port 35068 [preauth]
Nov 24 21:58:14 compute-0 python3.9[217662]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:58:15 compute-0 python3.9[217783]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764021494.4526064-171-217358669124467/.source.conf _original_basename=ceilometer.conf follow=False checksum=e93ef84feaa07737af66c0c1da2fd4bdcae81d37 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:58:16 compute-0 python3.9[217933]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:58:16 compute-0 podman[217934]: 2025-11-24 21:58:16.565583704 +0000 UTC m=+0.111707470 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 24 21:58:17 compute-0 python3.9[218074]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764021495.773932-171-158546332327610/.source.yaml _original_basename=polling.yaml follow=False checksum=5ef7021082c6431099dde63e021011029cd65119 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.615 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.617 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.617 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559e98b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.618 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f4c57fbbfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.618 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559e98b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.619 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559e98b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.619 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559e98b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.619 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559e98b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.619 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559e98b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.619 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559e98b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.620 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.620 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559e98b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.620 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f4c580340b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.621 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559e98b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.621 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.621 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559e98b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.621 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f4c57fba4e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.621 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559e98b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.621 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.621 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559e98b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.621 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f4c57fbb080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.622 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559e98b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.622 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.622 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559e98b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.622 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f4c57fbb1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.622 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559e98b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.622 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.622 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559e98b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.622 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f4c57fb9730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.623 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559e98b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.623 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.623 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559e98b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.623 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f4c57fbb200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.623 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559e98b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.623 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.623 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559e98b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.623 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f4c57fbb260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.624 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559e98b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.624 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.624 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559e98b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.624 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f4c57fbb2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.624 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559e98b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.624 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.624 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559e98b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.624 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f4c57fbb320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.624 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559e98b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.625 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.625 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559e98b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.625 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f4c57fbb380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.625 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.625 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f4c58034380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.625 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.625 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f4c57fbb740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.625 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.625 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f4c57fbb3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.625 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.625 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f4c57fbb9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.626 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.626 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f4c57fbb440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.626 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.626 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f4c57fbbc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.626 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.626 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f4c57fbbcb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.626 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.626 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f4c57fbbd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.626 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.626 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f4c57fbbda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.626 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.626 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f4c57fbbe30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.626 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.626 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f4c57fbb680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.626 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.627 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f4c57fbbec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.627 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.627 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f4c57fbb6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.627 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.627 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f4c59b99130>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.627 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.627 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f4c57fbbf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.627 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.627 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.627 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.627 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.628 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.628 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.628 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.628 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.628 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.628 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.628 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.628 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.628 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.628 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.628 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.629 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.629 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.629 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.629 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.629 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.629 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.629 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.629 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.629 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.629 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.629 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:58:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 21:58:17.629 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 21:58:17 compute-0 python3.9[218224]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:58:18 compute-0 python3.9[218346]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764021497.2637143-171-27341700526227/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:58:19 compute-0 python3.9[218496]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:58:19 compute-0 podman[218622]: 2025-11-24 21:58:19.946089034 +0000 UTC m=+0.073364970 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 24 21:58:20 compute-0 python3.9[218660]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:58:20 compute-0 python3.9[218819]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:58:21 compute-0 python3.9[218940]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764021500.3458009-230-149962213626246/.source.json follow=False _original_basename=ceilometer-agent-ipmi.json.j2 checksum=21255e7f7db3155b4a491729298d9407fe6f8335 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:58:22 compute-0 python3.9[219090]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:58:22 compute-0 python3.9[219166]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:58:23 compute-0 python3.9[219316]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:58:24 compute-0 python3.9[219437]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764021503.0530236-230-175018684571539/.source.json follow=False _original_basename=ceilometer_agent_ipmi.json.j2 checksum=cf81874b7544c057599ec397442879f74d42b3ec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:58:25 compute-0 python3.9[219587]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:58:25 compute-0 python3.9[219708]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764021504.4758413-230-57825294057951/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:58:26 compute-0 python3.9[219858]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:58:27 compute-0 python3.9[219979]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764021505.953626-230-105438812271312/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:58:27 compute-0 podman[220057]: 2025-11-24 21:58:27.508764117 +0000 UTC m=+0.069444547 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, com.redhat.component=ubi9-minimal-container, architecture=x86_64, io.buildah.version=1.33.7, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., release=1755695350, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, vcs-type=git)
Nov 24 21:58:27 compute-0 python3.9[220150]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:58:28 compute-0 python3.9[220271]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764021507.2812746-230-265535838488742/.source.json follow=False _original_basename=kepler.json.j2 checksum=89451093c8765edd3915016a9e87770fe489178d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:58:29 compute-0 python3.9[220421]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:58:29 compute-0 podman[220422]: 2025-11-24 21:58:29.512441041 +0000 UTC m=+0.092301797 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 24 21:58:29 compute-0 podman[203795]: time="2025-11-24T21:58:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 21:58:29 compute-0 podman[203795]: @ - - [24/Nov/2025:21:58:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 22543 "" "Go-http-client/1.1"
Nov 24 21:58:29 compute-0 podman[203795]: @ - - [24/Nov/2025:21:58:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3420 "" "Go-http-client/1.1"
Nov 24 21:58:29 compute-0 python3.9[220523]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:58:30 compute-0 sudo[220676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfcfykzoffwnygvbbztyhxxpeekovgiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021510.2035372-325-111902320751550/AnsiballZ_file.py'
Nov 24 21:58:30 compute-0 sudo[220676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:58:30 compute-0 python3.9[220678]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:58:30 compute-0 sudo[220676]: pam_unix(sudo:session): session closed for user root
Nov 24 21:58:31 compute-0 sudo[220828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klqrorstencmirszovogtmpqpzkvdyjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021511.019988-333-52437613833220/AnsiballZ_file.py'
Nov 24 21:58:31 compute-0 sudo[220828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:58:31 compute-0 openstack_network_exporter[205945]: ERROR   21:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 21:58:31 compute-0 openstack_network_exporter[205945]: ERROR   21:58:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 21:58:31 compute-0 openstack_network_exporter[205945]: ERROR   21:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 21:58:31 compute-0 openstack_network_exporter[205945]: ERROR   21:58:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 21:58:31 compute-0 openstack_network_exporter[205945]: ERROR   21:58:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 21:58:31 compute-0 python3.9[220830]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:58:31 compute-0 sudo[220828]: pam_unix(sudo:session): session closed for user root
Nov 24 21:58:32 compute-0 sudo[220980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hixxvdegzcoewofgfznitaqohqpskhbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021511.8132753-341-82594620764056/AnsiballZ_file.py'
Nov 24 21:58:32 compute-0 sudo[220980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:58:32 compute-0 python3.9[220982]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:58:32 compute-0 sudo[220980]: pam_unix(sudo:session): session closed for user root
Nov 24 21:58:33 compute-0 sudo[221157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-namvwkuzelbjlwdclhlqweljdncpeint ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021512.692555-349-19108352128674/AnsiballZ_stat.py'
Nov 24 21:58:33 compute-0 sudo[221157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:58:33 compute-0 podman[221107]: 2025-11-24 21:58:33.165214495 +0000 UTC m=+0.102618477 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 24 21:58:33 compute-0 podman[221106]: 2025-11-24 21:58:33.180409048 +0000 UTC m=+0.125365205 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:58:33 compute-0 python3.9[221170]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:58:33 compute-0 sudo[221157]: pam_unix(sudo:session): session closed for user root
Nov 24 21:58:33 compute-0 sudo[221297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydjtohzwskevyrcynltqriolwfnjyhef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021512.692555-349-19108352128674/AnsiballZ_copy.py'
Nov 24 21:58:33 compute-0 sudo[221297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:58:34 compute-0 python3.9[221299]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764021512.692555-349-19108352128674/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:58:34 compute-0 sudo[221297]: pam_unix(sudo:session): session closed for user root
Nov 24 21:58:34 compute-0 sudo[221373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjewqcqwfnicmdczlpcrahuexdnnxzsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021512.692555-349-19108352128674/AnsiballZ_stat.py'
Nov 24 21:58:34 compute-0 sudo[221373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:58:34 compute-0 python3.9[221375]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:58:34 compute-0 sudo[221373]: pam_unix(sudo:session): session closed for user root
Nov 24 21:58:35 compute-0 sudo[221496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kghhtcxkvxrvoleegqasovnkdfhfcwtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021512.692555-349-19108352128674/AnsiballZ_copy.py'
Nov 24 21:58:35 compute-0 sudo[221496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:58:35 compute-0 python3.9[221498]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764021512.692555-349-19108352128674/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:58:35 compute-0 sudo[221496]: pam_unix(sudo:session): session closed for user root
Nov 24 21:58:35 compute-0 sudo[221648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvldzrrwefjposdxeegnsmnsgjutodkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021515.5034604-349-52855370594847/AnsiballZ_stat.py'
Nov 24 21:58:35 compute-0 sudo[221648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:58:36 compute-0 python3.9[221650]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/kepler/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:58:36 compute-0 sudo[221648]: pam_unix(sudo:session): session closed for user root
Nov 24 21:58:36 compute-0 sudo[221771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrewqplonuspbfapxkjjldezevynppwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021515.5034604-349-52855370594847/AnsiballZ_copy.py'
Nov 24 21:58:36 compute-0 sudo[221771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:58:36 compute-0 python3.9[221773]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/kepler/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764021515.5034604-349-52855370594847/.source _original_basename=healthcheck follow=False checksum=57ed53cc150174efd98819129660d5b9ea9ea61a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 21:58:36 compute-0 sudo[221771]: pam_unix(sudo:session): session closed for user root
Nov 24 21:58:37 compute-0 sudo[221923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huqczwweyocehpkammitzpnnwxpyolle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021517.1976876-391-144834311974212/AnsiballZ_container_config_data.py'
Nov 24 21:58:37 compute-0 sudo[221923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:58:37 compute-0 python3.9[221925]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=ceilometer_agent_ipmi.json debug=False
Nov 24 21:58:37 compute-0 sudo[221923]: pam_unix(sudo:session): session closed for user root
Nov 24 21:58:38 compute-0 sudo[222075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngzmjddupvlrmryqrczdvuqdgyqrftro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021518.2453108-400-117370219126509/AnsiballZ_container_config_hash.py'
Nov 24 21:58:38 compute-0 sudo[222075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:58:38 compute-0 python3.9[222077]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 24 21:58:38 compute-0 sudo[222075]: pam_unix(sudo:session): session closed for user root
Nov 24 21:58:39 compute-0 sudo[222227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjfctutlzhubwtkfuudykxbsnyluceey ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764021519.2663412-410-185094064470985/AnsiballZ_edpm_container_manage.py'
Nov 24 21:58:39 compute-0 sudo[222227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:58:40 compute-0 python3[222229]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=ceilometer_agent_ipmi.json log_base_path=/var/log/containers/stdouts debug=False
Nov 24 21:58:40 compute-0 podman[222266]: 2025-11-24 21:58:40.415711975 +0000 UTC m=+0.068232910 container create a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, org.label-schema.license=GPLv2)
Nov 24 21:58:40 compute-0 podman[222266]: 2025-11-24 21:58:40.377634103 +0000 UTC m=+0.030155118 image pull 02e0056780c6b31017996766cd13000137ba644dac3fc851da034db8cf4ceb2c quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Nov 24 21:58:40 compute-0 python3[222229]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck ipmi --label config_id=edpm --label container_name=ceilometer_agent_ipmi --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified kolla_start
Nov 24 21:58:40 compute-0 sudo[222227]: pam_unix(sudo:session): session closed for user root
Nov 24 21:58:41 compute-0 sudo[222453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybumaoewocxuhsxtrpmydxogvbsnztkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021520.856025-418-133728364741148/AnsiballZ_stat.py'
Nov 24 21:58:41 compute-0 sudo[222453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:58:41 compute-0 python3.9[222455]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:58:41 compute-0 sudo[222453]: pam_unix(sudo:session): session closed for user root
Nov 24 21:58:42 compute-0 sudo[222608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygtyvusexjtekyduqvvqqremtaufdtxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021521.7909606-427-63943230373196/AnsiballZ_file.py'
Nov 24 21:58:42 compute-0 sudo[222608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:58:42 compute-0 python3.9[222610]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:58:42 compute-0 sudo[222608]: pam_unix(sudo:session): session closed for user root
Nov 24 21:58:42 compute-0 podman[222613]: 2025-11-24 21:58:42.520446466 +0000 UTC m=+0.074419812 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 24 21:58:43 compute-0 sudo[222785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abwzrjzpbplgqppwsbrgqlvfnsfdbint ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021522.4659204-427-126605008851294/AnsiballZ_copy.py'
Nov 24 21:58:43 compute-0 sudo[222785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:58:43 compute-0 python3.9[222787]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764021522.4659204-427-126605008851294/source dest=/etc/systemd/system/edpm_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:58:43 compute-0 sudo[222785]: pam_unix(sudo:session): session closed for user root
Nov 24 21:58:43 compute-0 sshd-session[222456]: Connection closed by authenticating user root 185.156.73.233 port 50826 [preauth]
Nov 24 21:58:43 compute-0 sudo[222861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxnmrlyepvrhymiwqjwbnzmazqboxciu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021522.4659204-427-126605008851294/AnsiballZ_systemd.py'
Nov 24 21:58:43 compute-0 sudo[222861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:58:44 compute-0 python3.9[222863]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 21:58:44 compute-0 systemd[1]: Reloading.
Nov 24 21:58:44 compute-0 systemd-rc-local-generator[222887]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:58:44 compute-0 systemd-sysv-generator[222891]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:58:44 compute-0 sudo[222861]: pam_unix(sudo:session): session closed for user root
Nov 24 21:58:44 compute-0 sudo[222971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgzxbtxncxghagklhpgprtirppeyqkvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021522.4659204-427-126605008851294/AnsiballZ_systemd.py'
Nov 24 21:58:44 compute-0 sudo[222971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:58:45 compute-0 python3.9[222973]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 21:58:45 compute-0 systemd[1]: Reloading.
Nov 24 21:58:45 compute-0 systemd-sysv-generator[223006]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:58:45 compute-0 systemd-rc-local-generator[223002]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:58:45 compute-0 systemd[1]: Starting ceilometer_agent_ipmi container...
Nov 24 21:58:45 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:58:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4fb7df1ae720ebc1a680a99bf4038a5b2d969f8960a8a80dd177e0398ea21a4/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 21:58:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4fb7df1ae720ebc1a680a99bf4038a5b2d969f8960a8a80dd177e0398ea21a4/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Nov 24 21:58:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4fb7df1ae720ebc1a680a99bf4038a5b2d969f8960a8a80dd177e0398ea21a4/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Nov 24 21:58:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4fb7df1ae720ebc1a680a99bf4038a5b2d969f8960a8a80dd177e0398ea21a4/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Nov 24 21:58:45 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846.
Nov 24 21:58:45 compute-0 podman[223013]: 2025-11-24 21:58:45.851272744 +0000 UTC m=+0.180196868 container init a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 21:58:45 compute-0 ceilometer_agent_ipmi[223028]: + sudo -E kolla_set_configs
Nov 24 21:58:45 compute-0 podman[223013]: 2025-11-24 21:58:45.887633163 +0000 UTC m=+0.216557257 container start a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:58:45 compute-0 podman[223013]: ceilometer_agent_ipmi
Nov 24 21:58:45 compute-0 sudo[223034]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 24 21:58:45 compute-0 systemd[1]: Started ceilometer_agent_ipmi container.
Nov 24 21:58:45 compute-0 sudo[223034]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 24 21:58:45 compute-0 sudo[223034]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Nov 24 21:58:45 compute-0 sudo[222971]: pam_unix(sudo:session): session closed for user root
Nov 24 21:58:45 compute-0 podman[223035]: 2025-11-24 21:58:45.955572482 +0000 UTC m=+0.047546738 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Nov 24 21:58:45 compute-0 systemd[1]: a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846-d018dac3d6659f8.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 21:58:45 compute-0 systemd[1]: a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846-d018dac3d6659f8.service: Failed with result 'exit-code'.
Nov 24 21:58:45 compute-0 ceilometer_agent_ipmi[223028]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 24 21:58:45 compute-0 ceilometer_agent_ipmi[223028]: INFO:__main__:Validating config file
Nov 24 21:58:45 compute-0 ceilometer_agent_ipmi[223028]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 24 21:58:45 compute-0 ceilometer_agent_ipmi[223028]: INFO:__main__:Copying service configuration files
Nov 24 21:58:45 compute-0 ceilometer_agent_ipmi[223028]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Nov 24 21:58:45 compute-0 ceilometer_agent_ipmi[223028]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Nov 24 21:58:45 compute-0 ceilometer_agent_ipmi[223028]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Nov 24 21:58:45 compute-0 ceilometer_agent_ipmi[223028]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Nov 24 21:58:45 compute-0 ceilometer_agent_ipmi[223028]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Nov 24 21:58:45 compute-0 ceilometer_agent_ipmi[223028]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Nov 24 21:58:45 compute-0 ceilometer_agent_ipmi[223028]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 24 21:58:45 compute-0 ceilometer_agent_ipmi[223028]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 24 21:58:45 compute-0 ceilometer_agent_ipmi[223028]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 24 21:58:45 compute-0 ceilometer_agent_ipmi[223028]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 24 21:58:45 compute-0 ceilometer_agent_ipmi[223028]: INFO:__main__:Writing out command to execute
Nov 24 21:58:45 compute-0 sudo[223034]: pam_unix(sudo:session): session closed for user root
Nov 24 21:58:45 compute-0 ceilometer_agent_ipmi[223028]: ++ cat /run_command
Nov 24 21:58:45 compute-0 ceilometer_agent_ipmi[223028]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Nov 24 21:58:45 compute-0 ceilometer_agent_ipmi[223028]: + ARGS=
Nov 24 21:58:45 compute-0 ceilometer_agent_ipmi[223028]: + sudo kolla_copy_cacerts
Nov 24 21:58:45 compute-0 sudo[223069]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 24 21:58:45 compute-0 sudo[223069]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 24 21:58:45 compute-0 sudo[223069]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Nov 24 21:58:45 compute-0 sudo[223069]: pam_unix(sudo:session): session closed for user root
Nov 24 21:58:45 compute-0 ceilometer_agent_ipmi[223028]: + [[ ! -n '' ]]
Nov 24 21:58:45 compute-0 ceilometer_agent_ipmi[223028]: + . kolla_extend_start
Nov 24 21:58:45 compute-0 ceilometer_agent_ipmi[223028]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Nov 24 21:58:45 compute-0 ceilometer_agent_ipmi[223028]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Nov 24 21:58:45 compute-0 ceilometer_agent_ipmi[223028]: + umask 0022
Nov 24 21:58:45 compute-0 ceilometer_agent_ipmi[223028]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
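The '+' lines above are the bash xtrace of the Kolla entrypoint handing control to ceilometer-polling. A minimal sketch of that hand-off, assuming the /run_command file and helper names visible in the trace (the variable guarding kolla_extend_start is hypothetical, since the trace only shows its expanded empty value; this is not the actual kolla_start source):

    # read the command written out during the config stage, then replace the shell with it
    CMD="$(cat /run_command)"
    ARGS=""
    sudo kolla_copy_cacerts                        # CA-bundle copy helper invoked via sudo in the trace
    if [[ ! -n "${SKIP_EXTEND_START}" ]]; then     # hypothetical variable; trace only shows [[ ! -n '' ]]
        . kolla_extend_start                       # per-image extension hook sourced before exec
    fi
    echo "Running command: '${CMD}'"
    umask 0022
    exec ${CMD} ${ARGS}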
Nov 24 21:58:46 compute-0 sudo[223210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slyavimyjoptbxvnanssisgwcrrawgap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021526.2501233-453-253629200237889/AnsiballZ_container_config_data.py'
Nov 24 21:58:46 compute-0 sudo[223210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:58:46 compute-0 podman[223212]: 2025-11-24 21:58:46.725685557 +0000 UTC m=+0.075708292 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.794 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.794 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.794 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.794 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.794 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.794 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.794 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.794 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.795 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.795 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.795 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.795 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.795 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.795 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.795 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.795 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.795 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.795 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.795 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.795 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.796 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.796 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.796 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.796 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.796 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.796 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.796 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.796 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.796 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.796 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.796 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.796 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.796 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.797 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.797 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.797 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.797 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.797 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.797 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.797 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.797 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.797 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.797 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.797 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.797 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.797 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.798 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.798 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.798 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.798 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.798 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.798 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.798 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.798 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.798 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.798 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.798 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.798 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.798 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.799 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.799 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.799 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.799 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.799 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.799 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.799 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.799 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.799 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.799 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.799 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.799 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.799 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.800 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.800 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.800 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.800 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.800 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.800 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.800 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.800 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.800 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.800 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.800 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.800 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.800 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.801 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.801 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.801 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.801 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.801 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.801 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.801 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.801 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.801 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.801 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.801 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.801 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.802 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.802 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.802 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.802 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.802 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.802 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.802 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.802 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.802 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.802 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.802 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.803 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.803 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.803 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.803 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.803 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.803 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.803 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.803 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.803 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.803 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.803 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.804 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.804 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.804 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.804 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.804 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.804 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.804 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.804 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.804 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.804 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.805 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.805 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.805 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.805 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.805 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.805 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.805 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.805 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.805 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.805 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.805 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.805 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.806 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.806 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.806 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.806 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.806 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.806 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.806 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.806 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.806 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.806 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.806 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.806 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.807 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.807 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.807 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.807 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.807 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.807 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.807 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.807 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.807 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.807 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.807 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.807 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.827 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.828 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.829 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Nov 24 21:58:46 compute-0 python3.9[223218]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=kepler.json debug=False
Nov 24 21:58:46 compute-0 sudo[223210]: pam_unix(sudo:session): session closed for user root
Nov 24 21:58:46 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:46.915 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpkrvvfvjf/privsep.sock']
Nov 24 21:58:46 compute-0 sudo[223237]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf privsep-helper --privsep_context ceilometer.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmpkrvvfvjf/privsep.sock
Nov 24 21:58:46 compute-0 sudo[223237]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 24 21:58:46 compute-0 sudo[223237]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Nov 24 21:58:47 compute-0 sudo[223391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snlxloyurxrglwhjevzuofiwrunpsxuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021527.1782339-462-237363747979891/AnsiballZ_container_config_hash.py'
Nov 24 21:58:47 compute-0 sudo[223391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:58:47 compute-0 sudo[223237]: pam_unix(sudo:session): session closed for user root
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.612 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.612 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpkrvvfvjf/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.476 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.480 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.483 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.483 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Nov 24 21:58:47 compute-0 python3.9[223393]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.760 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.761 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.762 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.763 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.763 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.763 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.764 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.764 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.764 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.764 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.765 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.765 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.765 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
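Each hardware.ipmi.* extension skipped above is rejected with 'IPMITool not supported on host', and the WARNING is the net result: with every pollster in the ipmi namespace failing its load-time probe, the agent has nothing to poll. A hedged shell check for the same precondition on the host, assuming ipmitool is installed and the usual in-band IPMI device nodes (none of this is taken from the ceilometer code):

    # in-band ipmitool access normally needs one of the local IPMI device nodes
    ls -l /dev/ipmi0 /dev/ipmi/0 /dev/ipmidev/0 2>/dev/null || echo "no local IPMI device node"
    # a quick sensor read fails fast when no BMC is reachable in-band
    ipmitool sensor list >/dev/null 2>&1 && echo "IPMI sensors readable" || echo "IPMI not usable on this host"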
Nov 24 21:58:47 compute-0 sudo[223391]: pam_unix(sudo:session): session closed for user root
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.770 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.771 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.771 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.771 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.771 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.772 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.772 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.773 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.773 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.773 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.774 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.774 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.774 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.774 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.774 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.774 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.774 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.774 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.775 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.775 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.775 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.775 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.775 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.775 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.775 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.775 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.775 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.775 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.775 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.776 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.776 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.776 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.776 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.776 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.776 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.776 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.776 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.776 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.776 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.776 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.776 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.777 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.777 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.777 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.777 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.777 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.777 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.777 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.777 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.777 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.777 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.778 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.778 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.778 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.778 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.778 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.778 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.778 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.778 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.778 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.778 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.778 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.778 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.779 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.779 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.779 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.779 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.779 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.779 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.779 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.779 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.779 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.779 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.780 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.780 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.780 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.780 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.780 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.780 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.780 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.780 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.780 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.780 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.780 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.781 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.781 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.781 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.781 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.781 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.781 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.781 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.781 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.781 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.781 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.781 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.782 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.782 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.782 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.782 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.782 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.782 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.782 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.782 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.782 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.782 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.782 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.783 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.783 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.783 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.783 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.783 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.783 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.783 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.783 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.783 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.783 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.783 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.784 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.784 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.784 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.784 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.784 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.784 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.784 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.784 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.784 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.784 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.784 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.785 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.785 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.785 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.785 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.785 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.785 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.785 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.785 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.785 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.785 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.785 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.786 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.786 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.786 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.786 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.786 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.786 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.786 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.786 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.786 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.786 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.786 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.786 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.787 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.787 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.787 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.787 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.787 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.787 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.787 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.787 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.787 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.787 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.787 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.787 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.788 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.788 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.788 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.788 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.788 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.788 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.788 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.788 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.788 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.788 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.788 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.788 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.789 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.789 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.789 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.789 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.789 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.789 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.789 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.789 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.789 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.789 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.789 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.790 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.790 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.790 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.790 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.790 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.790 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.790 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.790 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.790 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.790 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.790 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.791 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.791 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.791 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.791 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.791 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.791 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Nov 24 21:58:47 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:47.794 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
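[editor sketch] The load_config line above shows the polling definition the IPMI agent parsed from its polling.yaml (pipeline option polling.cfg_file earlier in the dump). Below is a minimal, assumed YAML layout that would parse to exactly that structure; only the resulting dict is taken from the log line, the file formatting itself is a guess.

    # Sketch only: a polling.yaml layout consistent with the dict logged by
    # ceilometer.agent.load_config above. The on-disk formatting is assumed.
    import yaml  # PyYAML

    ASSUMED_POLLING_YAML = """
    sources:
        - name: pollsters
          interval: 120
          meters:
              - hardware.*
    """

    parsed = yaml.safe_load(ASSUMED_POLLING_YAML)
    # Matches the structure recorded in the log:
    assert parsed == {'sources': [{'name': 'pollsters',
                                   'interval': 120,
                                   'meters': ['hardware.*']}]}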
Nov 24 21:58:48 compute-0 sudo[223547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqtxsetnexppefibijiombsxtqircing ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764021528.1162415-472-235945449795512/AnsiballZ_edpm_container_manage.py'
Nov 24 21:58:48 compute-0 sudo[223547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:58:48 compute-0 python3[223549]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=kepler.json log_base_path=/var/log/containers/stdouts debug=False
Nov 24 21:58:49 compute-0 podman[223587]: 2025-11-24 21:58:49.028744558 +0000 UTC m=+0.059782938 container create 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, release=1214.1726694543, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, vendor=Red Hat, Inc., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, architecture=x86_64, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=kepler, vcs-type=git, managed_by=edpm_ansible, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Nov 24 21:58:49 compute-0 podman[223587]: 2025-11-24 21:58:48.998335993 +0000 UTC m=+0.029374413 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Nov 24 21:58:49 compute-0 python3[223549]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name kepler --conmon-pidfile /run/kepler.pid --env ENABLE_GPU=true --env EXPOSE_CONTAINER_METRICS=true --env ENABLE_PROCESS_METRICS=true --env EXPOSE_VM_METRICS=true --env EXPOSE_ESTIMATED_IDLE_POWER_METRICS=false --env LIBVIRT_METADATA_URI=http://openstack.org/xmlns/libvirt/nova/1.1 --healthcheck-command /openstack/healthcheck kepler --label config_id=edpm --label container_name=kepler --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 8888:8888 --volume /lib/modules:/lib/modules:ro --volume /run/libvirt:/run/libvirt:shared,ro --volume /sys:/sys --volume /proc:/proc --volume /var/lib/openstack/healthchecks/kepler:/openstack:ro,z quay.io/sustainable_computing_io/kepler:release-0.7.12 -v=2
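[editor sketch] The PODMAN-CONTAINER-DEBUG line records both the config_data dict from kepler.json and the podman create command derived from it. The rough Python sketch below is written only from values visible in that line (environment and volumes abbreviated, labels and healthcheck flags omitted) and is not the edpm_container_manage source; it just illustrates how such a dict maps onto the argument list.

    # Rough illustration of turning a (trimmed) config_data dict into a
    # 'podman create' argument list like the one logged above.
    import shlex

    config_data = {
        'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12',
        'privileged': 'true',
        'ports': ['8888:8888'],
        'net': 'host',
        'command': '-v=2',
        'environment': {
            'ENABLE_GPU': 'true',
            'EXPOSE_CONTAINER_METRICS': 'true',
        },
        'volumes': [
            '/lib/modules:/lib/modules:ro',
            '/run/libvirt:/run/libvirt:shared,ro',
        ],
    }

    cmd = ['podman', 'create', '--name', 'kepler']
    for key, value in config_data['environment'].items():
        cmd += ['--env', f'{key}={value}']
    cmd += ['--network', config_data['net'], '--privileged=True']
    for port in config_data['ports']:
        cmd += ['--publish', port]
    for volume in config_data['volumes']:
        cmd += ['--volume', volume]
    cmd += [config_data['image'], config_data['command']]

    # Prints a shell-quoted command; the real invocation in the log carries
    # additional --env, --label, --volume and healthcheck options.
    print(shlex.join(cmd))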
Nov 24 21:58:49 compute-0 sudo[223547]: pam_unix(sudo:session): session closed for user root
Nov 24 21:58:49 compute-0 sudo[223774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfijnfbpwutwghzcygmvmtauufxdwcre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021529.4107602-480-73214178896363/AnsiballZ_stat.py'
Nov 24 21:58:49 compute-0 sudo[223774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:58:49 compute-0 python3.9[223776]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 21:58:50 compute-0 sudo[223774]: pam_unix(sudo:session): session closed for user root
Nov 24 21:58:50 compute-0 podman[223846]: 2025-11-24 21:58:50.539266216 +0000 UTC m=+0.087370784 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 24 21:58:50 compute-0 sudo[223950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkweviwjqskyvlqaaxroktlkmyicgixo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021530.368892-489-20062970029753/AnsiballZ_file.py'
Nov 24 21:58:50 compute-0 sudo[223950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:58:50 compute-0 python3.9[223952]: ansible-file Invoked with path=/etc/systemd/system/edpm_kepler.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:58:50 compute-0 sudo[223950]: pam_unix(sudo:session): session closed for user root
Nov 24 21:58:51 compute-0 sudo[224101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugzwocjxpxfiwtvwnuqyrkodctfgchhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021531.0358038-489-253296841632605/AnsiballZ_copy.py'
Nov 24 21:58:51 compute-0 sudo[224101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:58:51 compute-0 python3.9[224103]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764021531.0358038-489-253296841632605/source dest=/etc/systemd/system/edpm_kepler.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:58:51 compute-0 sudo[224101]: pam_unix(sudo:session): session closed for user root
Nov 24 21:58:52 compute-0 sudo[224177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wljpwrjqfdtbobvxbeinvzxetnzfydig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021531.0358038-489-253296841632605/AnsiballZ_systemd.py'
Nov 24 21:58:52 compute-0 sudo[224177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:58:52 compute-0 python3.9[224179]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 21:58:52 compute-0 systemd[1]: Reloading.
Nov 24 21:58:52 compute-0 systemd-rc-local-generator[224208]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:58:52 compute-0 systemd-sysv-generator[224212]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:58:52 compute-0 sudo[224177]: pam_unix(sudo:session): session closed for user root
Nov 24 21:58:53 compute-0 sudo[224289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjzpjiqodzegfippcevrpjvblmwpjhns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021531.0358038-489-253296841632605/AnsiballZ_systemd.py'
Nov 24 21:58:53 compute-0 sudo[224289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:58:53 compute-0 python3.9[224291]: ansible-systemd Invoked with state=restarted name=edpm_kepler.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 21:58:53 compute-0 systemd[1]: Reloading.
Nov 24 21:58:53 compute-0 systemd-rc-local-generator[224318]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:58:53 compute-0 systemd-sysv-generator[224324]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:58:53 compute-0 systemd[1]: Starting kepler container...
Nov 24 21:58:54 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:58:54 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db.
Nov 24 21:58:54 compute-0 podman[224331]: 2025-11-24 21:58:54.137476727 +0000 UTC m=+0.182861180 container init 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, vendor=Red Hat, Inc., release=1214.1726694543, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, distribution-scope=public, architecture=x86_64, build-date=2024-09-18T21:23:30, config_id=edpm, io.openshift.expose-services=, name=ubi9, vcs-type=git, io.buildah.version=1.29.0, io.openshift.tags=base rhel9)
Nov 24 21:58:54 compute-0 kepler[224347]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Nov 24 21:58:54 compute-0 podman[224331]: 2025-11-24 21:58:54.184066873 +0000 UTC m=+0.229451296 container start 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, config_id=edpm, io.openshift.expose-services=, vcs-type=git, container_name=kepler, release-0.7.12=, io.openshift.tags=base rhel9, name=ubi9, distribution-scope=public, architecture=x86_64, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., release=1214.1726694543)
Nov 24 21:58:54 compute-0 kepler[224347]: I1124 21:58:54.189853       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Nov 24 21:58:54 compute-0 kepler[224347]: I1124 21:58:54.190264       1 config.go:293] using gCgroup ID in the BPF program: true
Nov 24 21:58:54 compute-0 podman[224331]: kepler
Nov 24 21:58:54 compute-0 kepler[224347]: I1124 21:58:54.190288       1 config.go:295] kernel version: 5.14
Nov 24 21:58:54 compute-0 kepler[224347]: I1124 21:58:54.191229       1 power.go:78] Unable to obtain power, use estimate method
Nov 24 21:58:54 compute-0 kepler[224347]: I1124 21:58:54.191265       1 redfish.go:169] failed to get redfish credential file path
Nov 24 21:58:54 compute-0 kepler[224347]: I1124 21:58:54.191671       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Nov 24 21:58:54 compute-0 kepler[224347]: I1124 21:58:54.191683       1 power.go:79] using none to obtain power
Nov 24 21:58:54 compute-0 kepler[224347]: E1124 21:58:54.191700       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Nov 24 21:58:54 compute-0 kepler[224347]: E1124 21:58:54.191727       1 exporter.go:154] failed to init GPU accelerators: no devices found
Nov 24 21:58:54 compute-0 kepler[224347]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Nov 24 21:58:54 compute-0 kepler[224347]: I1124 21:58:54.193514       1 exporter.go:84] Number of CPUs: 8
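The lines above show the exporter probing for a hardware power source (Redfish credentials, an ACPI power meter) on this KVM guest, finding none, and falling back to the estimation path ("using none to obtain power"); the GPU accelerator init likewise fails because no devices exist. A quick way to confirm the same absence from the host shell, assuming the usual sysfs locations for RAPL powercap and the acpi_power_meter hwmon (these paths are an assumption, not taken from the log):

  $ ls /sys/class/powercap/intel-rapl* 2>/dev/null || echo "no RAPL powercap domains"
  $ ls /sys/class/hwmon/hwmon*/power1_average 2>/dev/null || echo "no ACPI power meter"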
Nov 24 21:58:54 compute-0 systemd[1]: Started kepler container.
Nov 24 21:58:54 compute-0 sudo[224289]: pam_unix(sudo:session): session closed for user root
Nov 24 21:58:54 compute-0 podman[224357]: 2025-11-24 21:58:54.304996999 +0000 UTC m=+0.100946466 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, maintainer=Red Hat, Inc., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, managed_by=edpm_ansible, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_id=edpm, io.buildah.version=1.29.0, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, io.openshift.expose-services=)
Nov 24 21:58:54 compute-0 systemd[1]: 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db-34e20c97d8bcfaa5.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 21:58:54 compute-0 systemd[1]: 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db-34e20c97d8bcfaa5.service: Failed with result 'exit-code'.
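The failing 178df6a27828…-34e20c97d8bcfaa5.service is the transient unit podman spawns for the healthcheck timer; it fired while the container was still reported as health_status=starting (health_failing_streak=1 above), so the first probe exited non-zero and systemd logged the failure. Subsequent probes typically clear it once the exporter is listening. To re-run the probe by hand and see the current state, using the container name from the log:

  $ podman healthcheck run kepler; echo "exit=$?"
  $ podman ps --filter name=kepler --format '{{.Names}} {{.Status}}'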
Nov 24 21:58:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:58:54.549 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 21:58:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:58:54.549 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 21:58:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:58:54.549 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 21:58:54 compute-0 kepler[224347]: I1124 21:58:54.802749       1 watcher.go:83] Using in cluster k8s config
Nov 24 21:58:54 compute-0 kepler[224347]: I1124 21:58:54.802808       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Nov 24 21:58:54 compute-0 kepler[224347]: E1124 21:58:54.802926       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Nov 24 21:58:54 compute-0 kepler[224347]: I1124 21:58:54.814147       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Nov 24 21:58:54 compute-0 kepler[224347]: I1124 21:58:54.814214       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Nov 24 21:58:54 compute-0 kepler[224347]: I1124 21:58:54.822836       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Nov 24 21:58:54 compute-0 kepler[224347]: I1124 21:58:54.822891       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Nov 24 21:58:54 compute-0 kepler[224347]: I1124 21:58:54.838001       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 24 21:58:54 compute-0 kepler[224347]: I1124 21:58:54.838069       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Nov 24 21:58:54 compute-0 kepler[224347]: I1124 21:58:54.838094       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Nov 24 21:58:54 compute-0 kepler[224347]: I1124 21:58:54.851260       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 24 21:58:54 compute-0 kepler[224347]: I1124 21:58:54.851318       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 24 21:58:54 compute-0 kepler[224347]: I1124 21:58:54.851327       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 24 21:58:54 compute-0 kepler[224347]: I1124 21:58:54.851336       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 24 21:58:54 compute-0 kepler[224347]: I1124 21:58:54.851398       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Nov 24 21:58:54 compute-0 kepler[224347]: I1124 21:58:54.851417       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Nov 24 21:58:54 compute-0 kepler[224347]: I1124 21:58:54.851543       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Nov 24 21:58:54 compute-0 kepler[224347]: I1124 21:58:54.851601       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Nov 24 21:58:54 compute-0 kepler[224347]: I1124 21:58:54.851633       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Nov 24 21:58:54 compute-0 kepler[224347]: I1124 21:58:54.851661       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Nov 24 21:58:54 compute-0 kepler[224347]: I1124 21:58:54.851881       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Nov 24 21:58:54 compute-0 kepler[224347]: I1124 21:58:54.853238       1 exporter.go:208] Started Kepler in 664.199476ms
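With the listener up on 0.0.0.0:8888 and the Process/Container/VM/Node registries registered above, the exporter can be scraped directly from the host. A quick smoke test, assuming curl is available and Kepler's default /metrics path and kepler_ metric prefix (not shown in this log):

  $ curl -s http://127.0.0.1:8888/metrics | grep -c '^kepler_'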
Nov 24 21:58:54 compute-0 sudo[224540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlepnqchxhcmwwsgappvjrmszbbhvues ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021534.4289403-513-172090287570227/AnsiballZ_systemd.py'
Nov 24 21:58:54 compute-0 sudo[224540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:58:55 compute-0 python3.9[224542]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_ipmi.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 21:58:55 compute-0 systemd[1]: Stopping ceilometer_agent_ipmi container...
Nov 24 21:58:55 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:55.332 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Nov 24 21:58:55 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:55.434 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304
Nov 24 21:58:55 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:55.435 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308
Nov 24 21:58:55 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:55.435 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12]
Nov 24 21:58:55 compute-0 ceilometer_agent_ipmi[223028]: 2025-11-24 21:58:55.449 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320
Nov 24 21:58:55 compute-0 systemd[1]: libpod-a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846.scope: Deactivated successfully.
Nov 24 21:58:55 compute-0 systemd[1]: libpod-a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846.scope: Consumed 2.219s CPU time.
Nov 24 21:58:55 compute-0 podman[224546]: 2025-11-24 21:58:55.674248561 +0000 UTC m=+0.396514256 container died a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 24 21:58:55 compute-0 systemd[1]: a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846-d018dac3d6659f8.timer: Deactivated successfully.
Nov 24 21:58:55 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846.
Nov 24 21:58:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846-userdata-shm.mount: Deactivated successfully.
Nov 24 21:58:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4fb7df1ae720ebc1a680a99bf4038a5b2d969f8960a8a80dd177e0398ea21a4-merged.mount: Deactivated successfully.
Nov 24 21:58:55 compute-0 podman[224546]: 2025-11-24 21:58:55.76570298 +0000 UTC m=+0.487968675 container cleanup a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 21:58:55 compute-0 podman[224546]: ceilometer_agent_ipmi
Nov 24 21:58:55 compute-0 podman[224574]: ceilometer_agent_ipmi
Nov 24 21:58:55 compute-0 systemd[1]: edpm_ceilometer_agent_ipmi.service: Deactivated successfully.
Nov 24 21:58:55 compute-0 systemd[1]: Stopped ceilometer_agent_ipmi container.
Nov 24 21:58:55 compute-0 systemd[1]: Starting ceilometer_agent_ipmi container...
Nov 24 21:58:56 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:58:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4fb7df1ae720ebc1a680a99bf4038a5b2d969f8960a8a80dd177e0398ea21a4/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 21:58:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4fb7df1ae720ebc1a680a99bf4038a5b2d969f8960a8a80dd177e0398ea21a4/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Nov 24 21:58:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4fb7df1ae720ebc1a680a99bf4038a5b2d969f8960a8a80dd177e0398ea21a4/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Nov 24 21:58:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4fb7df1ae720ebc1a680a99bf4038a5b2d969f8960a8a80dd177e0398ea21a4/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Nov 24 21:58:56 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846.
Nov 24 21:58:56 compute-0 podman[224583]: 2025-11-24 21:58:56.163461443 +0000 UTC m=+0.218383513 container init a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Nov 24 21:58:56 compute-0 ceilometer_agent_ipmi[224600]: + sudo -E kolla_set_configs
Nov 24 21:58:56 compute-0 podman[224583]: 2025-11-24 21:58:56.204194398 +0000 UTC m=+0.259116468 container start a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 24 21:58:56 compute-0 podman[224583]: ceilometer_agent_ipmi
Nov 24 21:58:56 compute-0 systemd[1]: Started ceilometer_agent_ipmi container.
Nov 24 21:58:56 compute-0 sudo[224606]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 24 21:58:56 compute-0 sudo[224606]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 24 21:58:56 compute-0 sudo[224606]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Nov 24 21:58:56 compute-0 sudo[224540]: pam_unix(sudo:session): session closed for user root
Nov 24 21:58:56 compute-0 ceilometer_agent_ipmi[224600]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 24 21:58:56 compute-0 ceilometer_agent_ipmi[224600]: INFO:__main__:Validating config file
Nov 24 21:58:56 compute-0 ceilometer_agent_ipmi[224600]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 24 21:58:56 compute-0 ceilometer_agent_ipmi[224600]: INFO:__main__:Copying service configuration files
Nov 24 21:58:56 compute-0 ceilometer_agent_ipmi[224600]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Nov 24 21:58:56 compute-0 ceilometer_agent_ipmi[224600]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Nov 24 21:58:56 compute-0 ceilometer_agent_ipmi[224600]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Nov 24 21:58:56 compute-0 ceilometer_agent_ipmi[224600]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Nov 24 21:58:56 compute-0 ceilometer_agent_ipmi[224600]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Nov 24 21:58:56 compute-0 ceilometer_agent_ipmi[224600]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Nov 24 21:58:56 compute-0 ceilometer_agent_ipmi[224600]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 24 21:58:56 compute-0 ceilometer_agent_ipmi[224600]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 24 21:58:56 compute-0 ceilometer_agent_ipmi[224600]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 24 21:58:56 compute-0 ceilometer_agent_ipmi[224600]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 24 21:58:56 compute-0 ceilometer_agent_ipmi[224600]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 24 21:58:56 compute-0 ceilometer_agent_ipmi[224600]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 24 21:58:56 compute-0 ceilometer_agent_ipmi[224600]: INFO:__main__:Writing out command to execute
Nov 24 21:58:56 compute-0 sudo[224606]: pam_unix(sudo:session): session closed for user root
Nov 24 21:58:56 compute-0 ceilometer_agent_ipmi[224600]: ++ cat /run_command
Nov 24 21:58:56 compute-0 ceilometer_agent_ipmi[224600]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Nov 24 21:58:56 compute-0 ceilometer_agent_ipmi[224600]: + ARGS=
Nov 24 21:58:56 compute-0 ceilometer_agent_ipmi[224600]: + sudo kolla_copy_cacerts
Nov 24 21:58:56 compute-0 podman[224607]: 2025-11-24 21:58:56.341252544 +0000 UTC m=+0.120806752 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=ceilometer_agent_ipmi)
Nov 24 21:58:56 compute-0 sudo[224633]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 24 21:58:56 compute-0 sudo[224633]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 24 21:58:56 compute-0 sudo[224633]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Nov 24 21:58:56 compute-0 sudo[224633]: pam_unix(sudo:session): session closed for user root
Nov 24 21:58:56 compute-0 systemd[1]: a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846-32d66634f6b2ff22.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 21:58:56 compute-0 systemd[1]: a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846-32d66634f6b2ff22.service: Failed with result 'exit-code'.
Nov 24 21:58:56 compute-0 ceilometer_agent_ipmi[224600]: + [[ ! -n '' ]]
Nov 24 21:58:56 compute-0 ceilometer_agent_ipmi[224600]: + . kolla_extend_start
Nov 24 21:58:56 compute-0 ceilometer_agent_ipmi[224600]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Nov 24 21:58:56 compute-0 ceilometer_agent_ipmi[224600]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Nov 24 21:58:56 compute-0 ceilometer_agent_ipmi[224600]: + umask 0022
Nov 24 21:58:56 compute-0 ceilometer_agent_ipmi[224600]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
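The trace above is the usual kolla bootstrap: kolla_set_configs reads /var/lib/kolla/config_files/config.json (bind-mounted from /var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json per the container's config_data), copies each listed file into place, writes the service command to /run_command, and kolla_start then execs it. A sketch of what this config.json plausibly contains, reconstructed from the four copy operations and the executed command logged above (the owner and perm values are assumptions, not taken from the log):

  {
    "command": "/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout",
    "config_files": [
      {"source": "/var/lib/openstack/config/ceilometer.conf",
       "dest": "/etc/ceilometer/ceilometer.conf",
       "owner": "ceilometer", "perm": "0600"},
      {"source": "/var/lib/openstack/config/polling.yaml",
       "dest": "/etc/ceilometer/polling.yaml",
       "owner": "ceilometer", "perm": "0600"},
      {"source": "/var/lib/openstack/config/custom.conf",
       "dest": "/etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf",
       "owner": "ceilometer", "perm": "0600"},
      {"source": "/var/lib/openstack/config/ceilometer-host-specific.conf",
       "dest": "/etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf",
       "owner": "ceilometer", "perm": "0600"}
    ]
  }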
Nov 24 21:58:57 compute-0 sudo[224781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-woofexyiyjazpqjqnjtzodkehdznyedi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021536.5611043-521-2744275862342/AnsiballZ_systemd.py'
Nov 24 21:58:57 compute-0 sudo[224781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.166 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.166 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.166 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.166 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.167 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.167 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.167 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.167 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.167 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.167 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.167 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.167 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.167 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.167 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.167 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.167 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.168 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.168 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.168 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.168 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.168 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.168 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.168 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.168 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.168 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.168 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.168 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.168 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.169 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.169 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.169 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.169 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.169 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.169 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.169 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.169 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.169 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.169 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.169 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.170 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.170 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.170 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.170 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.170 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.170 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.170 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.170 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.170 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.170 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.171 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.171 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.171 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.171 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.171 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.171 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.171 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.171 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.171 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.171 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.171 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.171 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.172 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.172 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.172 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.172 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.172 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.172 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.172 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.172 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.172 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.172 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.172 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.173 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.173 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.173 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.173 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.173 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.173 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.173 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.173 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.173 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.173 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.173 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.174 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.174 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.174 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.174 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.174 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.174 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.174 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.174 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.174 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.174 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.174 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.175 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.175 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.175 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.175 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.175 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.175 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.175 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.175 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.175 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.175 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.176 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.176 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.176 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.176 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.176 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.176 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.176 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.176 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.176 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.176 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.176 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.176 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.177 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.177 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.177 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.177 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.177 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.177 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.177 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.177 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.177 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.177 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.177 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.178 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.178 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.178 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.178 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.178 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.178 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.178 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.178 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.178 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.178 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.178 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.179 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.179 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.179 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.179 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.179 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.179 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.179 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.179 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.179 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.179 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.179 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.179 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.179 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.180 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.180 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.180 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.180 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.180 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.180 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.180 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.180 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.180 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.180 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.180 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.180 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.181 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.181 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.204 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.205 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.207 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.231 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpo_l7xtpe/privsep.sock']
Nov 24 21:58:57 compute-0 sudo[224788]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf privsep-helper --privsep_context ceilometer.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmpo_l7xtpe/privsep.sock
Nov 24 21:58:57 compute-0 sudo[224788]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 24 21:58:57 compute-0 sudo[224788]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Nov 24 21:58:57 compute-0 python3.9[224783]: ansible-ansible.builtin.systemd Invoked with name=edpm_kepler.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 21:58:57 compute-0 systemd[1]: Stopping kepler container...
Nov 24 21:58:57 compute-0 kepler[224347]: I1124 21:58:57.526609       1 exporter.go:218] Received shutdown signal
Nov 24 21:58:57 compute-0 kepler[224347]: I1124 21:58:57.527717       1 exporter.go:226] Exiting...
Nov 24 21:58:57 compute-0 systemd[1]: libpod-178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db.scope: Deactivated successfully.
Nov 24 21:58:57 compute-0 conmon[224347]: conmon 178df6a27828c3804fa2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db.scope/container/memory.events
Nov 24 21:58:57 compute-0 podman[224794]: 2025-11-24 21:58:57.73771055 +0000 UTC m=+0.284550748 container died 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, build-date=2024-09-18T21:23:30, version=9.4, release=1214.1726694543, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, container_name=kepler, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=)
Nov 24 21:58:57 compute-0 systemd[1]: 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db-34e20c97d8bcfaa5.timer: Deactivated successfully.
Nov 24 21:58:57 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db.
Nov 24 21:58:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a242b0c2a90286043985d1205ae546e0ccec4f5796f4b17ed80200e004f2f82-merged.mount: Deactivated successfully.
Nov 24 21:58:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db-userdata-shm.mount: Deactivated successfully.
Nov 24 21:58:57 compute-0 podman[224794]: 2025-11-24 21:58:57.784482342 +0000 UTC m=+0.331322550 container cleanup 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, version=9.4, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, config_id=edpm, container_name=kepler, managed_by=edpm_ansible, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, release-0.7.12=, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 24 21:58:57 compute-0 podman[224794]: kepler
Nov 24 21:58:57 compute-0 podman[224810]: 2025-11-24 21:58:57.858103869 +0000 UTC m=+0.096934642 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, architecture=x86_64, io.buildah.version=1.33.7, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6)
Nov 24 21:58:57 compute-0 podman[224831]: kepler
Nov 24 21:58:57 compute-0 systemd[1]: edpm_kepler.service: Deactivated successfully.
Nov 24 21:58:57 compute-0 systemd[1]: Stopped kepler container.
Nov 24 21:58:57 compute-0 systemd[1]: Starting kepler container...
Nov 24 21:58:57 compute-0 sudo[224788]: pam_unix(sudo:session): session closed for user root
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.949 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.950 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpo_l7xtpe/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.827 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.834 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.839 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 24 21:58:57 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:57.839 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Nov 24 21:58:58 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:58:58 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db.
Nov 24 21:58:58 compute-0 podman[224851]: 2025-11-24 21:58:58.042101652 +0000 UTC m=+0.120705139 container init 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, version=9.4, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, release=1214.1726694543, maintainer=Red Hat, Inc., vcs-type=git, config_id=edpm, release-0.7.12=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 24 21:58:58 compute-0 kepler[224868]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.070 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.070 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 24 21:58:58 compute-0 podman[224851]: 2025-11-24 21:58:58.071627319 +0000 UTC m=+0.150230736 container start 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.openshift.expose-services=, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, release=1214.1726694543, config_id=edpm, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., architecture=x86_64, version=9.4, build-date=2024-09-18T21:23:30, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.071 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.072 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.072 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.072 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.072 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.072 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.073 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.073 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.073 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.073 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.074 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Nov 24 21:58:58 compute-0 podman[224851]: kepler
Nov 24 21:58:58 compute-0 kepler[224868]: I1124 21:58:58.076611       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Nov 24 21:58:58 compute-0 kepler[224868]: I1124 21:58:58.076736       1 config.go:293] using gCgroup ID in the BPF program: true
Nov 24 21:58:58 compute-0 kepler[224868]: I1124 21:58:58.076759       1 config.go:295] kernel version: 5.14
Nov 24 21:58:58 compute-0 kepler[224868]: I1124 21:58:58.077320       1 power.go:78] Unable to obtain power, use estimate method
Nov 24 21:58:58 compute-0 kepler[224868]: I1124 21:58:58.077364       1 redfish.go:169] failed to get redfish credential file path
Nov 24 21:58:58 compute-0 kepler[224868]: I1124 21:58:58.077671       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Nov 24 21:58:58 compute-0 kepler[224868]: I1124 21:58:58.077683       1 power.go:79] using none to obtain power
Nov 24 21:58:58 compute-0 kepler[224868]: E1124 21:58:58.077697       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Nov 24 21:58:58 compute-0 kepler[224868]: E1124 21:58:58.077716       1 exporter.go:154] failed to init GPU accelerators: no devices found
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.077 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.078 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.078 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.078 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.078 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 24 21:58:58 compute-0 kepler[224868]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.078 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.078 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.079 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 kepler[224868]: I1124 21:58:58.079237       1 exporter.go:84] Number of CPUs: 8
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.079 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.079 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.079 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.079 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.080 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.080 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.080 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.080 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.081 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.081 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.081 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 systemd[1]: Started kepler container.
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.081 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.081 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.082 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.082 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.082 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.082 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.083 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.083 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.083 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.083 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.083 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.083 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.084 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.084 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.084 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.084 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.084 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.084 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.084 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.085 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.085 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.085 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.085 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.085 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.086 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.086 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.086 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.086 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.086 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.086 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.087 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.087 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.087 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.087 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.087 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.087 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.087 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.088 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.088 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.088 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.088 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.088 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.088 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.088 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.088 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.089 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.089 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.089 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.089 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.089 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.089 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.089 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.089 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.090 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.090 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.090 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.090 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.090 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.090 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.090 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.090 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.091 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.091 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.091 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.091 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.091 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.091 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.091 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.091 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.092 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.092 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.092 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.092 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.092 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.092 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.092 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.092 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.093 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.093 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.093 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.093 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.093 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.093 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.093 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.094 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.094 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.094 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.094 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.094 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.094 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.094 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.095 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.095 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.095 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.095 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.095 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.095 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.095 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.095 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.095 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.096 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.096 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.096 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.096 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.096 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.096 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.096 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.097 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.097 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.097 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.097 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.097 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.097 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.097 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.098 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.098 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.098 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.098 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.098 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.098 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.098 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.098 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.099 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.099 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.099 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.099 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.099 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.099 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.099 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.100 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.100 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.100 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.100 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.100 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.100 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.101 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.101 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.101 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.101 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.101 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.101 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.101 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.101 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.102 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.102 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.102 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.102 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.102 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.102 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.102 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.102 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.103 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.103 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.103 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.103 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.103 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.103 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.103 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.103 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.103 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.104 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.104 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.104 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.104 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.104 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.104 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.104 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.104 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.105 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.105 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.105 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.105 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.105 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.105 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.105 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.105 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.106 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.106 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.106 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.106 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.106 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.106 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.106 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.107 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Nov 24 21:58:58 compute-0 ceilometer_agent_ipmi[224600]: 2025-11-24 21:58:58.109 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Nov 24 21:58:58 compute-0 sudo[224781]: pam_unix(sudo:session): session closed for user root
Nov 24 21:58:58 compute-0 podman[224878]: 2025-11-24 21:58:58.180333125 +0000 UTC m=+0.091502793 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, vendor=Red Hat, Inc., version=9.4, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, name=ubi9, release-0.7.12=, build-date=2024-09-18T21:23:30, container_name=kepler, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, vcs-type=git, release=1214.1726694543)
Nov 24 21:58:58 compute-0 systemd[1]: 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db-246cf32c399782bf.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 21:58:58 compute-0 systemd[1]: 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db-246cf32c399782bf.service: Failed with result 'exit-code'.
Nov 24 21:58:58 compute-0 kepler[224868]: I1124 21:58:58.567184       1 watcher.go:83] Using in cluster k8s config
Nov 24 21:58:58 compute-0 kepler[224868]: I1124 21:58:58.567226       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Nov 24 21:58:58 compute-0 kepler[224868]: E1124 21:58:58.567325       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Nov 24 21:58:58 compute-0 kepler[224868]: I1124 21:58:58.571943       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Nov 24 21:58:58 compute-0 kepler[224868]: I1124 21:58:58.572108       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Nov 24 21:58:58 compute-0 kepler[224868]: I1124 21:58:58.577069       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Nov 24 21:58:58 compute-0 kepler[224868]: I1124 21:58:58.577109       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Nov 24 21:58:58 compute-0 kepler[224868]: I1124 21:58:58.586370       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 24 21:58:58 compute-0 kepler[224868]: I1124 21:58:58.586405       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Nov 24 21:58:58 compute-0 kepler[224868]: I1124 21:58:58.586421       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Nov 24 21:58:58 compute-0 kepler[224868]: I1124 21:58:58.597586       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 24 21:58:58 compute-0 kepler[224868]: I1124 21:58:58.597813       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 24 21:58:58 compute-0 kepler[224868]: I1124 21:58:58.597980       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 24 21:58:58 compute-0 kepler[224868]: I1124 21:58:58.598156       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 24 21:58:58 compute-0 kepler[224868]: I1124 21:58:58.598328       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Nov 24 21:58:58 compute-0 kepler[224868]: I1124 21:58:58.598562       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Nov 24 21:58:58 compute-0 kepler[224868]: I1124 21:58:58.598801       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Nov 24 21:58:58 compute-0 kepler[224868]: I1124 21:58:58.599005       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Nov 24 21:58:58 compute-0 kepler[224868]: I1124 21:58:58.599253       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Nov 24 21:58:58 compute-0 kepler[224868]: I1124 21:58:58.599483       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Nov 24 21:58:58 compute-0 kepler[224868]: I1124 21:58:58.599798       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Nov 24 21:58:58 compute-0 kepler[224868]: I1124 21:58:58.600993       1 exporter.go:208] Started Kepler in 524.654352ms
Nov 24 21:58:58 compute-0 sudo[225064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osysvtttkyxdsspgqvcwgrwdjhhbmmrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021538.3454416-529-279200704083692/AnsiballZ_find.py'
Nov 24 21:58:58 compute-0 sudo[225064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:58:58 compute-0 python3.9[225066]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 24 21:58:59 compute-0 sudo[225064]: pam_unix(sudo:session): session closed for user root
Nov 24 21:58:59 compute-0 podman[203795]: time="2025-11-24T21:58:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 21:58:59 compute-0 podman[203795]: @ - - [24/Nov/2025:21:58:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28294 "" "Go-http-client/1.1"
Nov 24 21:58:59 compute-0 podman[203795]: @ - - [24/Nov/2025:21:58:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4263 "" "Go-http-client/1.1"
Nov 24 21:59:00 compute-0 sudo[225232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezenjlskbufnvvmdqwfxxjzpelaeipmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021539.5558295-539-234452071114666/AnsiballZ_podman_container_info.py'
Nov 24 21:59:00 compute-0 sudo[225232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:00 compute-0 podman[225190]: 2025-11-24 21:59:00.309975119 +0000 UTC m=+0.138833092 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 24 21:59:00 compute-0 python3.9[225241]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Nov 24 21:59:00 compute-0 sudo[225232]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:01 compute-0 openstack_network_exporter[205945]: ERROR   21:59:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 21:59:01 compute-0 openstack_network_exporter[205945]: ERROR   21:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 21:59:01 compute-0 openstack_network_exporter[205945]: ERROR   21:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 21:59:01 compute-0 openstack_network_exporter[205945]: ERROR   21:59:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 21:59:01 compute-0 openstack_network_exporter[205945]: ERROR   21:59:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 21:59:01 compute-0 sudo[225404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejyhltwgcvkvlaktgkxecqnbumksrezr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021541.007932-547-168555556133738/AnsiballZ_podman_container_exec.py'
Nov 24 21:59:01 compute-0 sudo[225404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:01 compute-0 python3.9[225406]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 24 21:59:02 compute-0 systemd[1]: Started libpod-conmon-d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94.scope.
Nov 24 21:59:02 compute-0 podman[225407]: 2025-11-24 21:59:02.096027648 +0000 UTC m=+0.145596477 container exec d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Nov 24 21:59:02 compute-0 podman[225407]: 2025-11-24 21:59:02.10484015 +0000 UTC m=+0.154408949 container exec_died d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:59:02 compute-0 sudo[225404]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:02 compute-0 systemd[1]: libpod-conmon-d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94.scope: Deactivated successfully.
Nov 24 21:59:02 compute-0 sudo[225586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hylqtqqznideneofyysfsvwekwwgjdhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021542.4445684-555-185098990532735/AnsiballZ_podman_container_exec.py'
Nov 24 21:59:02 compute-0 sudo[225586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:03 compute-0 python3.9[225588]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 24 21:59:03 compute-0 systemd[1]: Started libpod-conmon-d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94.scope.
Nov 24 21:59:03 compute-0 podman[225589]: 2025-11-24 21:59:03.406052725 +0000 UTC m=+0.121526543 container exec d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 21:59:03 compute-0 podman[225589]: 2025-11-24 21:59:03.442518981 +0000 UTC m=+0.157992769 container exec_died d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 21:59:03 compute-0 sudo[225586]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:03 compute-0 systemd[1]: libpod-conmon-d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94.scope: Deactivated successfully.
Nov 24 21:59:03 compute-0 podman[225605]: 2025-11-24 21:59:03.517804866 +0000 UTC m=+0.100085441 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 21:59:03 compute-0 podman[225602]: 2025-11-24 21:59:03.563719444 +0000 UTC m=+0.155767411 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 21:59:04 compute-0 nova_compute[189608]: 2025-11-24 21:59:04.053 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 21:59:04 compute-0 nova_compute[189608]: 2025-11-24 21:59:04.072 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 21:59:04 compute-0 sudo[225805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcikydjlvcminnjoowsmcpjvmjsuqxmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021543.772323-563-200259843104165/AnsiballZ_file.py'
Nov 24 21:59:04 compute-0 sudo[225805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:04 compute-0 python3.9[225807]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:59:04 compute-0 sudo[225805]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:05 compute-0 sudo[225960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuampgdchgfaudjhrvovvxoegwibouip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021544.8472843-572-63284775556486/AnsiballZ_podman_container_info.py'
Nov 24 21:59:05 compute-0 sudo[225960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:05 compute-0 python3.9[225962]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Nov 24 21:59:05 compute-0 sudo[225960]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:05 compute-0 nova_compute[189608]: 2025-11-24 21:59:05.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 21:59:05 compute-0 nova_compute[189608]: 2025-11-24 21:59:05.795 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 21:59:05 compute-0 nova_compute[189608]: 2025-11-24 21:59:05.795 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 21:59:06 compute-0 sudo[226124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayisvarsjljujanjenvnfmuprssknbfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021546.0337553-580-224743221836886/AnsiballZ_podman_container_exec.py'
Nov 24 21:59:06 compute-0 sudo[226124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:06 compute-0 python3.9[226126]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 24 21:59:06 compute-0 nova_compute[189608]: 2025-11-24 21:59:06.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 21:59:06 compute-0 nova_compute[189608]: 2025-11-24 21:59:06.793 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 21:59:06 compute-0 nova_compute[189608]: 2025-11-24 21:59:06.793 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 21:59:06 compute-0 nova_compute[189608]: 2025-11-24 21:59:06.811 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 21:59:06 compute-0 nova_compute[189608]: 2025-11-24 21:59:06.811 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 21:59:06 compute-0 nova_compute[189608]: 2025-11-24 21:59:06.811 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 21:59:06 compute-0 nova_compute[189608]: 2025-11-24 21:59:06.811 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 21:59:06 compute-0 nova_compute[189608]: 2025-11-24 21:59:06.850 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 21:59:06 compute-0 nova_compute[189608]: 2025-11-24 21:59:06.850 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 21:59:06 compute-0 nova_compute[189608]: 2025-11-24 21:59:06.850 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 21:59:06 compute-0 nova_compute[189608]: 2025-11-24 21:59:06.851 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 21:59:06 compute-0 systemd[1]: Started libpod-conmon-fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6.scope.
Nov 24 21:59:06 compute-0 podman[226127]: 2025-11-24 21:59:06.961098072 +0000 UTC m=+0.165263383 container exec fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 24 21:59:07 compute-0 podman[226127]: 2025-11-24 21:59:07.00084968 +0000 UTC m=+0.205014981 container exec_died fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:59:07 compute-0 sudo[226124]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:07 compute-0 systemd[1]: libpod-conmon-fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6.scope: Deactivated successfully.
Nov 24 21:59:07 compute-0 nova_compute[189608]: 2025-11-24 21:59:07.290 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 21:59:07 compute-0 nova_compute[189608]: 2025-11-24 21:59:07.292 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5676MB free_disk=72.26311492919922GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 21:59:07 compute-0 nova_compute[189608]: 2025-11-24 21:59:07.292 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 21:59:07 compute-0 nova_compute[189608]: 2025-11-24 21:59:07.292 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 21:59:07 compute-0 nova_compute[189608]: 2025-11-24 21:59:07.357 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 21:59:07 compute-0 nova_compute[189608]: 2025-11-24 21:59:07.357 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 21:59:07 compute-0 nova_compute[189608]: 2025-11-24 21:59:07.391 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 21:59:07 compute-0 nova_compute[189608]: 2025-11-24 21:59:07.408 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 21:59:07 compute-0 nova_compute[189608]: 2025-11-24 21:59:07.410 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 21:59:07 compute-0 nova_compute[189608]: 2025-11-24 21:59:07.410 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.118s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 21:59:07 compute-0 sudo[226306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rljixztaxhyjomkekqkrvvdhdchjkbfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021547.3093352-588-157286840897181/AnsiballZ_podman_container_exec.py'
Nov 24 21:59:07 compute-0 sudo[226306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:08 compute-0 python3.9[226308]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 24 21:59:08 compute-0 systemd[1]: Started libpod-conmon-fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6.scope.
Nov 24 21:59:08 compute-0 podman[226309]: 2025-11-24 21:59:08.14478408 +0000 UTC m=+0.114991051 container exec fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 24 21:59:08 compute-0 podman[226309]: 2025-11-24 21:59:08.179500823 +0000 UTC m=+0.149707704 container exec_died fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 21:59:08 compute-0 systemd[1]: libpod-conmon-fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6.scope: Deactivated successfully.
Nov 24 21:59:08 compute-0 sudo[226306]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:09 compute-0 sudo[226486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdgdfxmcdqqhnjznweixtvabuzkmxyfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021548.5666468-596-71365367353201/AnsiballZ_file.py'
Nov 24 21:59:09 compute-0 sudo[226486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:09 compute-0 python3.9[226488]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:59:09 compute-0 sudo[226486]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:09 compute-0 nova_compute[189608]: 2025-11-24 21:59:09.392 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 21:59:09 compute-0 nova_compute[189608]: 2025-11-24 21:59:09.393 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 21:59:10 compute-0 sudo[226638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bifotqzcwbftavatsajqlbmjnflbzuzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021549.568462-605-74327250904963/AnsiballZ_podman_container_info.py'
Nov 24 21:59:10 compute-0 sudo[226638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:10 compute-0 python3.9[226640]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman
Nov 24 21:59:10 compute-0 sudo[226638]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:11 compute-0 sudo[226803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awwlvicsjwzckeecrkykbhensfmmppvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021550.821527-613-115366249387280/AnsiballZ_podman_container_exec.py'
Nov 24 21:59:11 compute-0 sudo[226803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:11 compute-0 python3.9[226805]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 24 21:59:11 compute-0 systemd[1]: Started libpod-conmon-5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe.scope.
Nov 24 21:59:11 compute-0 podman[226806]: 2025-11-24 21:59:11.714693957 +0000 UTC m=+0.130150349 container exec 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 21:59:11 compute-0 podman[226806]: 2025-11-24 21:59:11.749319416 +0000 UTC m=+0.164775788 container exec_died 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 21:59:11 compute-0 systemd[1]: libpod-conmon-5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe.scope: Deactivated successfully.
Nov 24 21:59:11 compute-0 sudo[226803]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:12 compute-0 sudo[226995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exryxedxlhrtygjcgtrmfusfpjsfrbri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021552.1159382-621-241762435407194/AnsiballZ_podman_container_exec.py'
Nov 24 21:59:12 compute-0 sudo[226995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:12 compute-0 podman[226959]: 2025-11-24 21:59:12.778002238 +0000 UTC m=+0.120035007 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 24 21:59:12 compute-0 python3.9[227002]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 24 21:59:13 compute-0 systemd[1]: Started libpod-conmon-5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe.scope.
Nov 24 21:59:13 compute-0 podman[227009]: 2025-11-24 21:59:13.091130887 +0000 UTC m=+0.112317149 container exec 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=multipathd)
Nov 24 21:59:13 compute-0 podman[227009]: 2025-11-24 21:59:13.125130696 +0000 UTC m=+0.146316928 container exec_died 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 24 21:59:13 compute-0 systemd[1]: libpod-conmon-5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe.scope: Deactivated successfully.
Nov 24 21:59:13 compute-0 sudo[226995]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:13 compute-0 sudo[227187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-toziywyukynigbpaqunjgqsowvwhkyqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021553.4492776-629-10582376551623/AnsiballZ_file.py'
Nov 24 21:59:13 compute-0 sudo[227187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:14 compute-0 python3.9[227189]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:59:14 compute-0 sudo[227187]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:15 compute-0 sudo[227339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fryazeozplbksrhjwycoxqmpxalblnsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021554.4531617-638-111908724428304/AnsiballZ_podman_container_info.py'
Nov 24 21:59:15 compute-0 sudo[227339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:15 compute-0 python3.9[227341]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Nov 24 21:59:15 compute-0 sudo[227339]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:16 compute-0 sudo[227503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxgessagfjzsxovwqozghtmgqfbiqpmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021555.7095466-646-30509071744713/AnsiballZ_podman_container_exec.py'
Nov 24 21:59:16 compute-0 sudo[227503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:16 compute-0 python3.9[227505]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 24 21:59:16 compute-0 systemd[1]: Started libpod-conmon-a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d.scope.
Nov 24 21:59:16 compute-0 podman[227506]: 2025-11-24 21:59:16.630973934 +0000 UTC m=+0.182245888 container exec a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844)
Nov 24 21:59:16 compute-0 podman[227506]: 2025-11-24 21:59:16.667519663 +0000 UTC m=+0.218791617 container exec_died a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, config_id=edpm)
Nov 24 21:59:16 compute-0 sudo[227503]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:16 compute-0 systemd[1]: libpod-conmon-a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d.scope: Deactivated successfully.
Nov 24 21:59:16 compute-0 podman[227536]: 2025-11-24 21:59:16.894065987 +0000 UTC m=+0.107383887 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true)
Nov 24 21:59:17 compute-0 sudo[227703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuznulplahslntpzpgflscvmkyanptjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021556.983688-654-200764931858863/AnsiballZ_podman_container_exec.py'
Nov 24 21:59:17 compute-0 sudo[227703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:17 compute-0 python3.9[227705]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 24 21:59:18 compute-0 systemd[1]: Started libpod-conmon-a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d.scope.
Nov 24 21:59:18 compute-0 podman[227706]: 2025-11-24 21:59:18.066687483 +0000 UTC m=+0.144021777 container exec a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 21:59:18 compute-0 podman[227706]: 2025-11-24 21:59:18.106276386 +0000 UTC m=+0.183610640 container exec_died a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 24 21:59:18 compute-0 systemd[1]: libpod-conmon-a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d.scope: Deactivated successfully.
Nov 24 21:59:18 compute-0 sudo[227703]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:18 compute-0 sudo[227885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsmarrmcjfgktlictbznhghrlylyrvlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021558.4637237-662-270185135875041/AnsiballZ_file.py'
Nov 24 21:59:18 compute-0 sudo[227885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:19 compute-0 python3.9[227887]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:59:19 compute-0 sudo[227885]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:20 compute-0 sudo[228038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxjtyqhovizipiihymvvnwiswsokplga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021559.5113587-671-126993390099265/AnsiballZ_podman_container_info.py'
Nov 24 21:59:20 compute-0 sudo[228038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:20 compute-0 python3.9[228040]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Nov 24 21:59:20 compute-0 sudo[228038]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:21 compute-0 sudo[228215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipoheuhnlxzqtgukndsawlzqbdbmbhxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021560.7045872-679-75409562795416/AnsiballZ_podman_container_exec.py'
Nov 24 21:59:21 compute-0 sudo[228215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:21 compute-0 podman[228177]: 2025-11-24 21:59:21.259651601 +0000 UTC m=+0.100693010 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=edpm, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 21:59:21 compute-0 python3.9[228223]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 24 21:59:21 compute-0 systemd[1]: Started libpod-conmon-c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0.scope.
Nov 24 21:59:21 compute-0 podman[228225]: 2025-11-24 21:59:21.598470223 +0000 UTC m=+0.137552339 container exec c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 24 21:59:21 compute-0 podman[228225]: 2025-11-24 21:59:21.631781051 +0000 UTC m=+0.170863187 container exec_died c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 24 21:59:21 compute-0 systemd[1]: libpod-conmon-c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0.scope: Deactivated successfully.
Nov 24 21:59:21 compute-0 sudo[228215]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:22 compute-0 sudo[228403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctnwnhpdybbjadpeagmdhftrujyyzxap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021562.0280297-687-74686064714353/AnsiballZ_podman_container_exec.py'
Nov 24 21:59:22 compute-0 sudo[228403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:22 compute-0 python3.9[228405]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 24 21:59:22 compute-0 systemd[1]: Started libpod-conmon-c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0.scope.
Nov 24 21:59:22 compute-0 podman[228406]: 2025-11-24 21:59:22.833276569 +0000 UTC m=+0.141403987 container exec c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 24 21:59:22 compute-0 podman[228406]: 2025-11-24 21:59:22.867262629 +0000 UTC m=+0.175390047 container exec_died c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 24 21:59:22 compute-0 sudo[228403]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:22 compute-0 systemd[1]: libpod-conmon-c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0.scope: Deactivated successfully.
Nov 24 21:59:23 compute-0 sudo[228586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmyjddxxcwlniyinalktxmzjosqifrnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021563.3206656-695-157676589087470/AnsiballZ_file.py'
Nov 24 21:59:23 compute-0 sudo[228586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:24 compute-0 python3.9[228588]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:59:24 compute-0 sudo[228586]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:24 compute-0 sudo[228738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlvyipelzxoucceftzrbscwyuemasoui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021564.380216-704-136750793248666/AnsiballZ_podman_container_info.py'
Nov 24 21:59:24 compute-0 sudo[228738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:25 compute-0 python3.9[228740]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Nov 24 21:59:25 compute-0 sudo[228738]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:26 compute-0 sudo[228902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wixsjrigkxhiassmwcdabgnplrvsfjzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021565.550571-712-146441086474410/AnsiballZ_podman_container_exec.py'
Nov 24 21:59:26 compute-0 sudo[228902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:26 compute-0 python3.9[228904]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 24 21:59:26 compute-0 systemd[1]: Started libpod-conmon-9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27.scope.
Nov 24 21:59:26 compute-0 podman[228905]: 2025-11-24 21:59:26.537528943 +0000 UTC m=+0.120157062 container exec 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 24 21:59:26 compute-0 podman[228905]: 2025-11-24 21:59:26.571127691 +0000 UTC m=+0.153755800 container exec_died 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 24 21:59:26 compute-0 podman[228911]: 2025-11-24 21:59:26.60900539 +0000 UTC m=+0.147591428 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=2, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi)
Nov 24 21:59:26 compute-0 systemd[1]: a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846-32d66634f6b2ff22.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 21:59:26 compute-0 systemd[1]: a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846-32d66634f6b2ff22.service: Failed with result 'exit-code'.
Nov 24 21:59:26 compute-0 systemd[1]: libpod-conmon-9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27.scope: Deactivated successfully.
Nov 24 21:59:26 compute-0 sudo[228902]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:27 compute-0 sudo[229102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sawiprfbzmmorrajzzxwijitzqwkitna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021566.9557867-720-80140073375625/AnsiballZ_podman_container_exec.py'
Nov 24 21:59:27 compute-0 sudo[229102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:27 compute-0 python3.9[229104]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 24 21:59:27 compute-0 systemd[1]: Started libpod-conmon-9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27.scope.
Nov 24 21:59:27 compute-0 podman[229105]: 2025-11-24 21:59:27.851832714 +0000 UTC m=+0.150021283 container exec 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 24 21:59:27 compute-0 podman[229105]: 2025-11-24 21:59:27.887740953 +0000 UTC m=+0.185929512 container exec_died 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 24 21:59:27 compute-0 sudo[229102]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:27 compute-0 systemd[1]: libpod-conmon-9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27.scope: Deactivated successfully.
Nov 24 21:59:28 compute-0 podman[229136]: 2025-11-24 21:59:28.118198779 +0000 UTC m=+0.124129545 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.expose-services=, managed_by=edpm_ansible, release=1755695350, io.openshift.tags=minimal rhel9, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 24 21:59:28 compute-0 podman[229248]: 2025-11-24 21:59:28.519910521 +0000 UTC m=+0.081880508 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, version=9.4, io.openshift.expose-services=, com.redhat.component=ubi9-container, container_name=kepler, release-0.7.12=, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm)
Nov 24 21:59:28 compute-0 sudo[229322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jysyfvikjwcqyknusanrmyhcszboevxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021568.2443364-728-92014862359909/AnsiballZ_file.py'
Nov 24 21:59:28 compute-0 sudo[229322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:28 compute-0 python3.9[229324]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:59:29 compute-0 sudo[229322]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:29 compute-0 podman[203795]: time="2025-11-24T21:59:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 21:59:29 compute-0 podman[203795]: @ - - [24/Nov/2025:21:59:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28293 "" "Go-http-client/1.1"
Nov 24 21:59:29 compute-0 podman[203795]: @ - - [24/Nov/2025:21:59:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4278 "" "Go-http-client/1.1"
Nov 24 21:59:29 compute-0 sudo[229474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctuhtsbwryrlpkssmjkrnzifzxzncwyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021569.3176017-737-10780116634058/AnsiballZ_podman_container_info.py'
Nov 24 21:59:29 compute-0 sudo[229474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:30 compute-0 python3.9[229476]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Nov 24 21:59:30 compute-0 sudo[229474]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:30 compute-0 podman[229514]: 2025-11-24 21:59:30.593236049 +0000 UTC m=+0.138112866 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 24 21:59:31 compute-0 sudo[229663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dysydbufckjhohopjesslwxrppucjnjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021570.5428123-745-76994057178303/AnsiballZ_podman_container_exec.py'
Nov 24 21:59:31 compute-0 sudo[229663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:31 compute-0 python3.9[229665]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 24 21:59:31 compute-0 openstack_network_exporter[205945]: ERROR   21:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 21:59:31 compute-0 openstack_network_exporter[205945]: ERROR   21:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 21:59:31 compute-0 openstack_network_exporter[205945]: ERROR   21:59:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 21:59:31 compute-0 openstack_network_exporter[205945]: ERROR   21:59:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 21:59:31 compute-0 openstack_network_exporter[205945]: 
Nov 24 21:59:31 compute-0 openstack_network_exporter[205945]: ERROR   21:59:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 21:59:31 compute-0 openstack_network_exporter[205945]: 
Nov 24 21:59:31 compute-0 systemd[1]: Started libpod-conmon-366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a.scope.
Nov 24 21:59:31 compute-0 podman[229666]: 2025-11-24 21:59:31.563420504 +0000 UTC m=+0.175621433 container exec 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, version=9.6, architecture=x86_64, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9)
Nov 24 21:59:31 compute-0 podman[229666]: 2025-11-24 21:59:31.601713417 +0000 UTC m=+0.213914356 container exec_died 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, architecture=x86_64, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, vendor=Red Hat, Inc., version=9.6, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, managed_by=edpm_ansible, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, build-date=2025-08-20T13:12:41)
Nov 24 21:59:31 compute-0 systemd[1]: libpod-conmon-366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a.scope: Deactivated successfully.
Nov 24 21:59:31 compute-0 sudo[229663]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:32 compute-0 sudo[229845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blyriqldnsrkfbrwdzigobbfrdrzqzbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021572.018742-753-57622681998916/AnsiballZ_podman_container_exec.py'
Nov 24 21:59:32 compute-0 sudo[229845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:32 compute-0 python3.9[229847]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 24 21:59:33 compute-0 systemd[1]: Started libpod-conmon-366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a.scope.
Nov 24 21:59:33 compute-0 podman[229848]: 2025-11-24 21:59:33.055784353 +0000 UTC m=+0.147055341 container exec 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, architecture=x86_64, com.redhat.component=ubi9-minimal-container, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, config_id=edpm, distribution-scope=public, managed_by=edpm_ansible, vcs-type=git, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 24 21:59:33 compute-0 podman[229848]: 2025-11-24 21:59:33.089813544 +0000 UTC m=+0.181084552 container exec_died 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, io.buildah.version=1.33.7, managed_by=edpm_ansible, vcs-type=git, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, container_name=openstack_network_exporter, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, distribution-scope=public)
Nov 24 21:59:33 compute-0 sudo[229845]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:33 compute-0 systemd[1]: libpod-conmon-366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a.scope: Deactivated successfully.
Nov 24 21:59:34 compute-0 sudo[230055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odsopgweoikvjfzozxectdlkgvxywtrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021573.4982772-761-139682097926610/AnsiballZ_file.py'
Nov 24 21:59:34 compute-0 sudo[230055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:34 compute-0 podman[230000]: 2025-11-24 21:59:34.052050964 +0000 UTC m=+0.093733405 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 24 21:59:34 compute-0 podman[229999]: 2025-11-24 21:59:34.097463626 +0000 UTC m=+0.149186727 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 24 21:59:34 compute-0 python3.9[230062]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:59:34 compute-0 sudo[230055]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:35 compute-0 sudo[230220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iylklopmjmgprchitcytfffactsyqsyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021574.6157558-770-21836248518674/AnsiballZ_podman_container_info.py'
Nov 24 21:59:35 compute-0 sudo[230220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:35 compute-0 python3.9[230222]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_ipmi'] executable=podman
Nov 24 21:59:35 compute-0 sudo[230220]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:36 compute-0 sudo[230384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzhgmqlwpzkbagxgdbqpaevbydqbymuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021575.8329968-778-42752182910812/AnsiballZ_podman_container_exec.py'
Nov 24 21:59:36 compute-0 sudo[230384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:36 compute-0 python3.9[230386]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 24 21:59:36 compute-0 systemd[1]: Started libpod-conmon-a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846.scope.
Nov 24 21:59:36 compute-0 podman[230387]: 2025-11-24 21:59:36.781062607 +0000 UTC m=+0.146830385 container exec a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 24 21:59:36 compute-0 podman[230387]: 2025-11-24 21:59:36.81614589 +0000 UTC m=+0.181913668 container exec_died a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 21:59:36 compute-0 sudo[230384]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:36 compute-0 systemd[1]: libpod-conmon-a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846.scope: Deactivated successfully.
Nov 24 21:59:37 compute-0 sudo[230567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzealiguokwycrbvmfpylbukaypubhsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021577.1408553-786-209129626976807/AnsiballZ_podman_container_exec.py'
Nov 24 21:59:37 compute-0 sudo[230567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:37 compute-0 python3.9[230569]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 24 21:59:37 compute-0 systemd[1]: Started libpod-conmon-a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846.scope.
Nov 24 21:59:37 compute-0 podman[230570]: 2025-11-24 21:59:37.95812297 +0000 UTC m=+0.143576065 container exec a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_ipmi)
Nov 24 21:59:37 compute-0 podman[230570]: 2025-11-24 21:59:37.970032088 +0000 UTC m=+0.155485123 container exec_died a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 21:59:38 compute-0 sudo[230567]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:38 compute-0 systemd[1]: libpod-conmon-a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846.scope: Deactivated successfully.
Nov 24 21:59:38 compute-0 sudo[230749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgoewxwteqnlldwborofvffjsyjhjcsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021578.391436-794-22472006929463/AnsiballZ_file.py'
Nov 24 21:59:38 compute-0 sudo[230749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:39 compute-0 python3.9[230751]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:59:39 compute-0 sudo[230749]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:40 compute-0 sudo[230901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnilyjaxnqoffnwvomsnnolqcmqvdeoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021579.55513-803-276068259911594/AnsiballZ_podman_container_info.py'
Nov 24 21:59:40 compute-0 sudo[230901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:40 compute-0 python3.9[230903]: ansible-containers.podman.podman_container_info Invoked with name=['kepler'] executable=podman
Nov 24 21:59:40 compute-0 sudo[230901]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:41 compute-0 sudo[231065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cktchrbyqfjadpciqkjfjrlrdywlqhfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021580.81077-811-34624882924002/AnsiballZ_podman_container_exec.py'
Nov 24 21:59:41 compute-0 sudo[231065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:41 compute-0 python3.9[231067]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 24 21:59:41 compute-0 systemd[1]: Started libpod-conmon-178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db.scope.
Nov 24 21:59:41 compute-0 podman[231070]: 2025-11-24 21:59:41.765946032 +0000 UTC m=+0.169337280 container exec 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, config_id=edpm, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, release=1214.1726694543, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.4, distribution-scope=public, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Nov 24 21:59:41 compute-0 podman[231070]: 2025-11-24 21:59:41.800249311 +0000 UTC m=+0.203640549 container exec_died 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, version=9.4, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, distribution-scope=public, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.component=ubi9-container, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release=1214.1726694543)
Nov 24 21:59:41 compute-0 sshd-session[231068]: Invalid user node from 193.32.162.145 port 50756
Nov 24 21:59:41 compute-0 sudo[231065]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:41 compute-0 systemd[1]: libpod-conmon-178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db.scope: Deactivated successfully.
Nov 24 21:59:41 compute-0 sshd-session[231068]: Connection closed by invalid user node 193.32.162.145 port 50756 [preauth]
Nov 24 21:59:42 compute-0 sudo[231246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifuyrtyjnvhkmcgjcsblelvpemkmycpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021582.1698432-819-237493496768100/AnsiballZ_podman_container_exec.py'
Nov 24 21:59:42 compute-0 sudo[231246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:42 compute-0 python3.9[231248]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 24 21:59:42 compute-0 systemd[1]: Started libpod-conmon-178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db.scope.
Nov 24 21:59:43 compute-0 podman[231249]: 2025-11-24 21:59:43.018434155 +0000 UTC m=+0.115859009 container exec 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, architecture=x86_64, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, release=1214.1726694543, io.openshift.tags=base rhel9, release-0.7.12=, config_id=edpm, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc.)
Nov 24 21:59:43 compute-0 podman[231249]: 2025-11-24 21:59:43.055603512 +0000 UTC m=+0.153028306 container exec_died 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, architecture=x86_64, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.expose-services=, release-0.7.12=, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, version=9.4, config_id=edpm, container_name=kepler, managed_by=edpm_ansible, distribution-scope=public, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9)
Nov 24 21:59:43 compute-0 systemd[1]: libpod-conmon-178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db.scope: Deactivated successfully.
Nov 24 21:59:43 compute-0 sudo[231246]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:43 compute-0 podman[231262]: 2025-11-24 21:59:43.132482086 +0000 UTC m=+0.112498085 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 24 21:59:43 compute-0 sudo[231448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjmfoufoogwbymumfyzrkorvratfwcpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021583.3810453-827-46117725313652/AnsiballZ_file.py'
Nov 24 21:59:43 compute-0 sudo[231448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:44 compute-0 python3.9[231450]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/kepler recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:59:44 compute-0 sudo[231448]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:45 compute-0 sudo[231600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvknyfheiamjpxylifxrsxohvtlnesmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021584.5051062-836-170017005502623/AnsiballZ_file.py'
Nov 24 21:59:45 compute-0 sudo[231600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:45 compute-0 python3.9[231602]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:59:45 compute-0 sudo[231600]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:46 compute-0 sudo[231752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgziylerwsikhgstittfomoytozigtso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021585.615729-844-264062463282583/AnsiballZ_stat.py'
Nov 24 21:59:46 compute-0 sudo[231752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:46 compute-0 python3.9[231754]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/kepler.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:59:46 compute-0 sudo[231752]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:47 compute-0 podman[231849]: 2025-11-24 21:59:47.073302554 +0000 UTC m=+0.092638151 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd)
Nov 24 21:59:47 compute-0 sudo[231892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rexelhggjumfoupuaeolagcvvtinswjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021585.615729-844-264062463282583/AnsiballZ_copy.py'
Nov 24 21:59:47 compute-0 sudo[231892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:47 compute-0 python3.9[231894]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/kepler.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764021585.615729-844-264062463282583/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:59:47 compute-0 sudo[231892]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:48 compute-0 sudo[232044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zilchshvbvkdcqsmkzmxvvglmhlkjwod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021587.6418526-860-133603324416983/AnsiballZ_file.py'
Nov 24 21:59:48 compute-0 sudo[232044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:48 compute-0 python3.9[232046]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:59:48 compute-0 sudo[232044]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:49 compute-0 sudo[232196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bolcotzmjowybtfexcezsgtscspftwxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021588.7693431-868-224011512984205/AnsiballZ_stat.py'
Nov 24 21:59:49 compute-0 sudo[232196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:49 compute-0 python3.9[232198]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:59:49 compute-0 sudo[232196]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:49 compute-0 sudo[232275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgeutcdbonvxzqggdjfpqgsnwvlbpwmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021588.7693431-868-224011512984205/AnsiballZ_file.py'
Nov 24 21:59:49 compute-0 sudo[232275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:49 compute-0 python3.9[232277]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:59:50 compute-0 sudo[232275]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:50 compute-0 sudo[232427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ceezjtwhfgodxoplyykddkhdqbwdprrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021590.274049-880-230146918887582/AnsiballZ_stat.py'
Nov 24 21:59:50 compute-0 sudo[232427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:50 compute-0 python3.9[232429]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:59:51 compute-0 sudo[232427]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:51 compute-0 sudo[232521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlnhedqilkrpvbgqxxsenhvelsylondn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021590.274049-880-230146918887582/AnsiballZ_file.py'
Nov 24 21:59:51 compute-0 podman[232479]: 2025-11-24 21:59:51.46844895 +0000 UTC m=+0.111979678 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm)
Nov 24 21:59:51 compute-0 sudo[232521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:51 compute-0 python3.9[232526]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.9d55soiy recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:59:51 compute-0 sudo[232521]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:52 compute-0 sudo[232676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfrcqfrmkrcxrvkybblegcvzmkensduk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021591.9787083-892-9765024435553/AnsiballZ_stat.py'
Nov 24 21:59:52 compute-0 sudo[232676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:52 compute-0 python3.9[232678]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:59:52 compute-0 sudo[232676]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:53 compute-0 sudo[232754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcwiamqoyghvyxszrscfukwdbftxvoim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021591.9787083-892-9765024435553/AnsiballZ_file.py'
Nov 24 21:59:53 compute-0 sudo[232754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:53 compute-0 python3.9[232756]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:59:53 compute-0 sudo[232754]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:54 compute-0 sudo[232906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvfcsnpwxntqgwoblciuqxchdtepwyna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021593.728939-905-247772028693088/AnsiballZ_command.py'
Nov 24 21:59:54 compute-0 sudo[232906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:54 compute-0 python3.9[232908]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 21:59:54 compute-0 sudo[232906]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:59:54.551 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 21:59:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:59:54.552 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 21:59:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 21:59:54.552 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 21:59:55 compute-0 sudo[233059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrpkhosscivupjodlutfixocwtdvrmfx ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764021594.7380724-913-69318716609682/AnsiballZ_edpm_nftables_from_files.py'
Nov 24 21:59:55 compute-0 sudo[233059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:55 compute-0 python3[233061]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 24 21:59:55 compute-0 sudo[233059]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:56 compute-0 sudo[233211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohfocahflbwfnlcjydxdmngnlqiuymub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021595.9370787-921-48889245320232/AnsiballZ_stat.py'
Nov 24 21:59:56 compute-0 sudo[233211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:56 compute-0 python3.9[233213]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:59:56 compute-0 sudo[233211]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:57 compute-0 sudo[233304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shedzuuejtkcysdabnkmveagspjxjymv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021595.9370787-921-48889245320232/AnsiballZ_file.py'
Nov 24 21:59:57 compute-0 podman[233263]: 2025-11-24 21:59:57.188032541 +0000 UTC m=+0.116261861 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 24 21:59:57 compute-0 sudo[233304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:57 compute-0 python3.9[233309]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:59:57 compute-0 sudo[233304]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:58 compute-0 podman[233433]: 2025-11-24 21:59:58.300990985 +0000 UTC m=+0.069746554 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_id=edpm, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9)
Nov 24 21:59:58 compute-0 sudo[233479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-labaqyjuwaaqnlgunshonnytreizdjva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021597.732563-933-202062315689724/AnsiballZ_stat.py'
Nov 24 21:59:58 compute-0 sudo[233479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:58 compute-0 python3.9[233481]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 21:59:58 compute-0 sudo[233479]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:58 compute-0 sudo[233576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clzlorgwmqzwfiyujusikpluqmhwjzlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021597.732563-933-202062315689724/AnsiballZ_file.py'
Nov 24 21:59:58 compute-0 podman[233531]: 2025-11-24 21:59:58.922709902 +0000 UTC m=+0.101833876 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., architecture=x86_64, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, release=1214.1726694543, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler)
Nov 24 21:59:58 compute-0 sudo[233576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:59:59 compute-0 python3.9[233579]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 21:59:59 compute-0 sudo[233576]: pam_unix(sudo:session): session closed for user root
Nov 24 21:59:59 compute-0 podman[203795]: time="2025-11-24T21:59:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 21:59:59 compute-0 podman[203795]: @ - - [24/Nov/2025:21:59:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Nov 24 21:59:59 compute-0 podman[203795]: @ - - [24/Nov/2025:21:59:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4274 "" "Go-http-client/1.1"
Nov 24 21:59:59 compute-0 sudo[233729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgdfkgloeanekkypqonubikaihfgqmxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021599.4095826-945-276745646870349/AnsiballZ_stat.py'
Nov 24 21:59:59 compute-0 sudo[233729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 22:00:00 compute-0 python3.9[233731]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 22:00:00 compute-0 sudo[233729]: pam_unix(sudo:session): session closed for user root
Nov 24 22:00:00 compute-0 sudo[233807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqnfreevrzqtswyligawczzgjldtplxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021599.4095826-945-276745646870349/AnsiballZ_file.py'
Nov 24 22:00:00 compute-0 sudo[233807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 22:00:00 compute-0 podman[233809]: 2025-11-24 22:00:00.73442217 +0000 UTC m=+0.084631523 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 24 22:00:00 compute-0 python3.9[233810]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 22:00:00 compute-0 sudo[233807]: pam_unix(sudo:session): session closed for user root
Nov 24 22:00:01 compute-0 openstack_network_exporter[205945]: ERROR   22:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:00:01 compute-0 openstack_network_exporter[205945]: ERROR   22:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:00:01 compute-0 openstack_network_exporter[205945]: ERROR   22:00:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:00:01 compute-0 openstack_network_exporter[205945]: ERROR   22:00:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:00:01 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:00:01 compute-0 openstack_network_exporter[205945]: ERROR   22:00:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:00:01 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:00:01 compute-0 sudo[233983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zihnqtesmyglbbiunxhxaomtyjxyxltm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021601.1486135-957-280145379772440/AnsiballZ_stat.py'
Nov 24 22:00:01 compute-0 sudo[233983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 22:00:01 compute-0 python3.9[233985]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 22:00:01 compute-0 sudo[233983]: pam_unix(sudo:session): session closed for user root
Nov 24 22:00:02 compute-0 sudo[234061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-doviybggqwsxgcgeuqldglvzjhanxhjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021601.1486135-957-280145379772440/AnsiballZ_file.py'
Nov 24 22:00:02 compute-0 sudo[234061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 22:00:02 compute-0 nova_compute[189608]: 2025-11-24 22:00:02.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:00:02 compute-0 nova_compute[189608]: 2025-11-24 22:00:02.794 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 24 22:00:02 compute-0 nova_compute[189608]: 2025-11-24 22:00:02.807 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 24 22:00:02 compute-0 nova_compute[189608]: 2025-11-24 22:00:02.807 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:00:02 compute-0 nova_compute[189608]: 2025-11-24 22:00:02.808 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 24 22:00:02 compute-0 nova_compute[189608]: 2025-11-24 22:00:02.816 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:00:03 compute-0 python3.9[234063]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 22:00:03 compute-0 sudo[234061]: pam_unix(sudo:session): session closed for user root
Nov 24 22:00:03 compute-0 sudo[234213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqudkophpausozfsttdhohaufsjrjbzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021603.2815342-969-29165742125334/AnsiballZ_stat.py'
Nov 24 22:00:03 compute-0 sudo[234213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 22:00:04 compute-0 python3.9[234215]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 22:00:04 compute-0 sudo[234213]: pam_unix(sudo:session): session closed for user root
Nov 24 22:00:04 compute-0 podman[234274]: 2025-11-24 22:00:04.585784948 +0000 UTC m=+0.119924664 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:00:04 compute-0 podman[234268]: 2025-11-24 22:00:04.62960458 +0000 UTC m=+0.170184225 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 22:00:04 compute-0 sudo[234379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aazrlsbxrjmmwrmldrpfbtpmhokqkbhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021603.2815342-969-29165742125334/AnsiballZ_copy.py'
Nov 24 22:00:04 compute-0 sudo[234379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 22:00:04 compute-0 nova_compute[189608]: 2025-11-24 22:00:04.824 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:00:04 compute-0 python3.9[234381]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764021603.2815342-969-29165742125334/.source.nft follow=False _original_basename=ruleset.j2 checksum=b82fbd2c71bb7c36c630c2301913f0f42fd2e7ce backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 22:00:04 compute-0 sudo[234379]: pam_unix(sudo:session): session closed for user root
Nov 24 22:00:05 compute-0 sudo[234531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgkziefjfjytgmgtkqvtrrxzkntlivog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021605.3268917-984-189462077642327/AnsiballZ_file.py'
Nov 24 22:00:05 compute-0 sudo[234531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 22:00:06 compute-0 python3.9[234533]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 22:00:06 compute-0 sudo[234531]: pam_unix(sudo:session): session closed for user root
Nov 24 22:00:06 compute-0 nova_compute[189608]: 2025-11-24 22:00:06.789 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:00:06 compute-0 nova_compute[189608]: 2025-11-24 22:00:06.792 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:00:06 compute-0 nova_compute[189608]: 2025-11-24 22:00:06.792 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:00:06 compute-0 nova_compute[189608]: 2025-11-24 22:00:06.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:00:06 compute-0 nova_compute[189608]: 2025-11-24 22:00:06.833 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:00:06 compute-0 nova_compute[189608]: 2025-11-24 22:00:06.833 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:00:06 compute-0 nova_compute[189608]: 2025-11-24 22:00:06.834 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:00:06 compute-0 nova_compute[189608]: 2025-11-24 22:00:06.835 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 22:00:06 compute-0 sudo[234683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izcqqxzhebysjeojooyrswuvnzayozno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021606.3170245-992-224241756432172/AnsiballZ_command.py'
Nov 24 22:00:06 compute-0 sudo[234683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 22:00:07 compute-0 python3.9[234685]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 22:00:07 compute-0 sudo[234683]: pam_unix(sudo:session): session closed for user root
Nov 24 22:00:07 compute-0 nova_compute[189608]: 2025-11-24 22:00:07.260 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:00:07 compute-0 nova_compute[189608]: 2025-11-24 22:00:07.261 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5689MB free_disk=72.26323699951172GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 22:00:07 compute-0 nova_compute[189608]: 2025-11-24 22:00:07.261 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:00:07 compute-0 nova_compute[189608]: 2025-11-24 22:00:07.261 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:00:07 compute-0 nova_compute[189608]: 2025-11-24 22:00:07.445 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 22:00:07 compute-0 nova_compute[189608]: 2025-11-24 22:00:07.446 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 22:00:07 compute-0 nova_compute[189608]: 2025-11-24 22:00:07.565 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Refreshing inventories for resource provider 7680d048-14f1-46f8-a34d-a7eb32eb11df _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 24 22:00:07 compute-0 nova_compute[189608]: 2025-11-24 22:00:07.737 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Updating ProviderTree inventory for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 24 22:00:07 compute-0 nova_compute[189608]: 2025-11-24 22:00:07.737 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Updating inventory in ProviderTree for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 24 22:00:07 compute-0 nova_compute[189608]: 2025-11-24 22:00:07.766 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Refreshing aggregate associations for resource provider 7680d048-14f1-46f8-a34d-a7eb32eb11df, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 24 22:00:07 compute-0 nova_compute[189608]: 2025-11-24 22:00:07.814 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Refreshing trait associations for resource provider 7680d048-14f1-46f8-a34d-a7eb32eb11df, traits: HW_CPU_X86_AVX2,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NODE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE4A,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_ACCELERATORS,HW_CPU_X86_SHA,HW_CPU_X86_AVX,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_1_2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 24 22:00:07 compute-0 nova_compute[189608]: 2025-11-24 22:00:07.870 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:00:07 compute-0 nova_compute[189608]: 2025-11-24 22:00:07.890 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:00:07 compute-0 nova_compute[189608]: 2025-11-24 22:00:07.893 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 22:00:07 compute-0 nova_compute[189608]: 2025-11-24 22:00:07.893 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:00:07 compute-0 sshd-session[234766]: Invalid user sol from 45.148.10.240 port 36202
Nov 24 22:00:08 compute-0 sshd-session[234766]: Connection closed by invalid user sol 45.148.10.240 port 36202 [preauth]
Nov 24 22:00:08 compute-0 sudo[234841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwbqkmfhccnschgiumfnmwflowphxdiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021607.5303404-1000-52389910625132/AnsiballZ_blockinfile.py'
Nov 24 22:00:08 compute-0 sudo[234841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 22:00:08 compute-0 python3.9[234843]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 22:00:08 compute-0 sudo[234841]: pam_unix(sudo:session): session closed for user root
Nov 24 22:00:08 compute-0 nova_compute[189608]: 2025-11-24 22:00:08.894 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:00:08 compute-0 nova_compute[189608]: 2025-11-24 22:00:08.895 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 22:00:08 compute-0 nova_compute[189608]: 2025-11-24 22:00:08.895 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 22:00:08 compute-0 nova_compute[189608]: 2025-11-24 22:00:08.910 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 22:00:08 compute-0 nova_compute[189608]: 2025-11-24 22:00:08.911 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:00:08 compute-0 nova_compute[189608]: 2025-11-24 22:00:08.912 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:00:09 compute-0 sudo[234993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgfhkfjzpnbeyamgaxqidfftritchpbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021608.7540226-1009-189520239033022/AnsiballZ_command.py'
Nov 24 22:00:09 compute-0 sudo[234993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 22:00:09 compute-0 python3.9[234995]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 22:00:09 compute-0 sudo[234993]: pam_unix(sudo:session): session closed for user root
Nov 24 22:00:09 compute-0 nova_compute[189608]: 2025-11-24 22:00:09.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:00:09 compute-0 nova_compute[189608]: 2025-11-24 22:00:09.793 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 22:00:10 compute-0 sudo[235146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlmeufwpdweuudbywlqebbkudnpvdbfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021609.7756667-1017-133764953929613/AnsiballZ_stat.py'
Nov 24 22:00:10 compute-0 sudo[235146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 22:00:10 compute-0 python3.9[235148]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 22:00:10 compute-0 sudo[235146]: pam_unix(sudo:session): session closed for user root
Nov 24 22:00:11 compute-0 sudo[235300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-miyqpeeorpirlcbavuxxsubwycctazzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021610.7771451-1025-224061880378300/AnsiballZ_command.py'
Nov 24 22:00:11 compute-0 sudo[235300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 22:00:11 compute-0 python3.9[235302]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 22:00:11 compute-0 sudo[235300]: pam_unix(sudo:session): session closed for user root
Nov 24 22:00:12 compute-0 sudo[235455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbvaohbdciatbyhexsnxaskyvrtmyzds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021611.9100478-1033-52487684385361/AnsiballZ_file.py'
Nov 24 22:00:12 compute-0 sudo[235455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 22:00:12 compute-0 python3.9[235457]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 22:00:12 compute-0 sudo[235455]: pam_unix(sudo:session): session closed for user root
Nov 24 22:00:13 compute-0 sshd-session[215083]: Connection closed by 192.168.122.30 port 36452
Nov 24 22:00:13 compute-0 sshd-session[215079]: pam_unix(sshd:session): session closed for user zuul
Nov 24 22:00:13 compute-0 systemd[1]: session-27.scope: Deactivated successfully.
Nov 24 22:00:13 compute-0 systemd[1]: session-27.scope: Consumed 1min 50.440s CPU time.
Nov 24 22:00:13 compute-0 systemd-logind[806]: Session 27 logged out. Waiting for processes to exit.
Nov 24 22:00:13 compute-0 systemd-logind[806]: Removed session 27.
Nov 24 22:00:13 compute-0 podman[235482]: 2025-11-24 22:00:13.259278598 +0000 UTC m=+0.076821097 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 24 22:00:17 compute-0 podman[235506]: 2025-11-24 22:00:17.532762151 +0000 UTC m=+0.097683472 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.616 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.617 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.617 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b194f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.618 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f4c57fbbfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.618 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b194f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.619 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b194f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.619 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b194f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.619 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b194f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.619 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b194f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.620 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b194f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.620 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b194f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.620 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b194f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.620 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b194f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.620 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b194f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.620 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b194f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.621 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b194f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.621 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b194f0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.621 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b194f0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.622 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b194f0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.622 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b194f0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.622 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b194f0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.622 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b194f0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.622 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b194f0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.622 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b194f0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.623 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b194f0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.621 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.623 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f4c580340b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.624 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.623 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b194f0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.624 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b194f0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.624 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b194f0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.625 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b194f0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.624 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f4c57fba4e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.625 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.625 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f4c57fbb080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.625 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.626 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f4c57fbb1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.626 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.626 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f4c57fb9730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.626 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.626 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f4c57fbb200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.626 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.626 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f4c57fbb260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.627 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.627 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f4c57fbb2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.627 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.627 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f4c57fbb320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.627 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.627 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f4c57fbb380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.627 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.628 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f4c58034380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.628 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.628 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f4c57fbb740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.628 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.628 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f4c57fbb3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.628 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.628 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f4c57fbb9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.629 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.629 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f4c57fbb440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.629 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.629 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f4c57fbbc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.629 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.629 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f4c57fbbcb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.629 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.629 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f4c57fbbd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.630 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.630 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f4c57fbbda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.630 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.630 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f4c57fbbe30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.630 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.630 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f4c57fbb680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.631 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.631 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f4c57fbbec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.631 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.631 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f4c57fbb6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.631 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.631 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f4c59b99130>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.631 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.632 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f4c57fbbf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.632 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.632 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.632 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.632 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.633 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.633 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.633 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.633 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.633 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.633 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.633 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.633 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.634 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.634 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.634 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.634 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.634 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.634 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.634 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.634 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.635 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.635 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.635 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.635 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.635 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.635 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:00:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:00:17.635 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:00:18 compute-0 sshd-session[235526]: Accepted publickey for zuul from 192.168.122.30 port 40472 ssh2: ECDSA SHA256:EZFVJPqmz26FBCQurIACD0v24tZqtt5C19oQoS60ZHY
Nov 24 22:00:18 compute-0 systemd-logind[806]: New session 28 of user zuul.
Nov 24 22:00:18 compute-0 systemd[1]: Started Session 28 of User zuul.
Nov 24 22:00:18 compute-0 sshd-session[235526]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 22:00:19 compute-0 python3.9[235679]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 22:00:21 compute-0 sudo[235834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqnpwmfmnwhsumzkvnnovwgdpohuoifa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021620.3398714-34-63890432171881/AnsiballZ_systemd.py'
Nov 24 22:00:21 compute-0 sudo[235834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 22:00:21 compute-0 python3.9[235836]: ansible-ansible.builtin.systemd Invoked with name=rsyslog daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None masked=None
Nov 24 22:00:21 compute-0 sudo[235834]: pam_unix(sudo:session): session closed for user root
Nov 24 22:00:22 compute-0 sudo[236003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enpstacsbjfmkakavjnhybnknsozyvoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021621.7360775-42-27974595234197/AnsiballZ_setup.py'
Nov 24 22:00:22 compute-0 sudo[236003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 22:00:22 compute-0 podman[235961]: 2025-11-24 22:00:22.309196828 +0000 UTC m=+0.130930280 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 24 22:00:22 compute-0 python3.9[236008]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 22:00:22 compute-0 sudo[236003]: pam_unix(sudo:session): session closed for user root
Nov 24 22:00:23 compute-0 sudo[236091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awwirhhxfqereuxdzrlcwfhhutbxvwdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021621.7360775-42-27974595234197/AnsiballZ_dnf.py'
Nov 24 22:00:23 compute-0 sudo[236091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 22:00:23 compute-0 python3.9[236093]: ansible-ansible.legacy.dnf Invoked with name=['rsyslog-openssl'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 22:00:26 compute-0 sudo[236091]: pam_unix(sudo:session): session closed for user root
Nov 24 22:00:27 compute-0 sudo[236249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exuwcbgoiiatgmyvjscotkewvgdcjgap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021626.4243135-54-38309086612871/AnsiballZ_stat.py'
Nov 24 22:00:27 compute-0 sudo[236249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 22:00:27 compute-0 python3.9[236251]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/rsyslog/ca-openshift.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 22:00:27 compute-0 sudo[236249]: pam_unix(sudo:session): session closed for user root
Nov 24 22:00:27 compute-0 podman[236282]: 2025-11-24 22:00:27.572229282 +0000 UTC m=+0.122927832 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 24 22:00:27 compute-0 sudo[236390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sftdpboiagautqouuxpthlnunmeryfxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021626.4243135-54-38309086612871/AnsiballZ_copy.py'
Nov 24 22:00:27 compute-0 sudo[236390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 22:00:28 compute-0 python3.9[236392]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/rsyslog/ca-openshift.crt mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764021626.4243135-54-38309086612871/.source.crt _original_basename=ca-openshift.crt follow=False checksum=1d88bab26da5c85710a770c705f3555781bf2a38 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 22:00:28 compute-0 sudo[236390]: pam_unix(sudo:session): session closed for user root
Nov 24 22:00:28 compute-0 podman[236453]: 2025-11-24 22:00:28.574491386 +0000 UTC m=+0.123467959 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.buildah.version=1.33.7, container_name=openstack_network_exporter, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, io.openshift.expose-services=, name=ubi9-minimal)
Nov 24 22:00:29 compute-0 sudo[236562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkuygcvnmubnipdqzoglvfhhuypfctmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021628.3623478-69-112072236585898/AnsiballZ_file.py'
Nov 24 22:00:29 compute-0 sudo[236562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 22:00:29 compute-0 podman[236564]: 2025-11-24 22:00:29.10332623 +0000 UTC m=+0.084149033 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.expose-services=, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, com.redhat.component=ubi9-container, managed_by=edpm_ansible, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, architecture=x86_64, vcs-type=git, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543)
Nov 24 22:00:29 compute-0 python3.9[236565]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/rsyslog.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 22:00:29 compute-0 sudo[236562]: pam_unix(sudo:session): session closed for user root
Nov 24 22:00:29 compute-0 podman[203795]: time="2025-11-24T22:00:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:00:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:00:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Nov 24 22:00:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:00:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4281 "" "Go-http-client/1.1"
Nov 24 22:00:30 compute-0 sudo[236733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvkczmwcnljrkaxkvjabkrxjswhnahkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021629.516003-77-174446151236436/AnsiballZ_stat.py'
Nov 24 22:00:30 compute-0 sudo[236733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 22:00:30 compute-0 python3.9[236735]: ansible-ansible.legacy.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 22:00:30 compute-0 sudo[236733]: pam_unix(sudo:session): session closed for user root
Nov 24 22:00:30 compute-0 sudo[236856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siewdrppobxblxuocbpiuyazrfgldihj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021629.516003-77-174446151236436/AnsiballZ_copy.py'
Nov 24 22:00:30 compute-0 sudo[236856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 22:00:30 compute-0 python3.9[236858]: ansible-ansible.legacy.copy Invoked with dest=/etc/rsyslog.d/10-telemetry.conf mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764021629.516003-77-174446151236436/.source.conf _original_basename=10-telemetry.conf follow=False checksum=76865d9dd4bf9cd322a47065c046bcac194645ab backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 22:00:30 compute-0 sudo[236856]: pam_unix(sudo:session): session closed for user root
Nov 24 22:00:31 compute-0 openstack_network_exporter[205945]: ERROR   22:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:00:31 compute-0 openstack_network_exporter[205945]: ERROR   22:00:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:00:31 compute-0 openstack_network_exporter[205945]: ERROR   22:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:00:31 compute-0 openstack_network_exporter[205945]: ERROR   22:00:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:00:31 compute-0 openstack_network_exporter[205945]: ERROR   22:00:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:00:31 compute-0 podman[236958]: 2025-11-24 22:00:31.5428249 +0000 UTC m=+0.095250676 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 24 22:00:31 compute-0 sudo[237030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kugzkrmjavueeytuvzrgnpkzeypujbdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764021631.2297528-92-175329969042412/AnsiballZ_systemd.py'
Nov 24 22:00:31 compute-0 sudo[237030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 22:00:31 compute-0 python3.9[237032]: ansible-ansible.builtin.systemd Invoked with name=rsyslog.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 22:00:32 compute-0 systemd[1]: Stopping System Logging Service...
Nov 24 22:00:32 compute-0 rsyslogd[1009]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1009" x-info="https://www.rsyslog.com"] exiting on signal 15.
Nov 24 22:00:32 compute-0 systemd[1]: rsyslog.service: Deactivated successfully.
Nov 24 22:00:32 compute-0 systemd[1]: Stopped System Logging Service.
Nov 24 22:00:32 compute-0 systemd[1]: rsyslog.service: Consumed 4.312s CPU time, 9.4M memory peak, read 0B from disk, written 7.1M to disk.
Nov 24 22:00:32 compute-0 systemd[1]: Starting System Logging Service...
Nov 24 22:00:32 compute-0 rsyslogd[237036]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="237036" x-info="https://www.rsyslog.com"] start
Nov 24 22:00:32 compute-0 systemd[1]: Started System Logging Service.
Nov 24 22:00:32 compute-0 rsyslogd[237036]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 22:00:32 compute-0 rsyslogd[237036]: Warning: Certificate file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2330 ]
Nov 24 22:00:32 compute-0 rsyslogd[237036]: Warning: Key file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2331 ]
Nov 24 22:00:32 compute-0 rsyslogd[237036]: nsd_ossl: TLS Connection initiated with remote syslog server '172.17.0.80'. [v8.2510.0-2.el9]
Nov 24 22:00:32 compute-0 sudo[237030]: pam_unix(sudo:session): session closed for user root
Nov 24 22:00:32 compute-0 rsyslogd[237036]: nsd_ossl: Information, no shared curve between syslog client '172.17.0.80' and server [v8.2510.0-2.el9]
Nov 24 22:00:32 compute-0 sshd-session[235529]: Connection closed by 192.168.122.30 port 40472
Nov 24 22:00:32 compute-0 sshd-session[235526]: pam_unix(sshd:session): session closed for user zuul
Nov 24 22:00:32 compute-0 systemd-logind[806]: Session 28 logged out. Waiting for processes to exit.
Nov 24 22:00:32 compute-0 systemd[1]: session-28.scope: Deactivated successfully.
Nov 24 22:00:32 compute-0 systemd[1]: session-28.scope: Consumed 11.301s CPU time.
Nov 24 22:00:32 compute-0 systemd-logind[806]: Removed session 28.
Nov 24 22:00:35 compute-0 podman[237066]: 2025-11-24 22:00:35.510546449 +0000 UTC m=+0.067848189 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 22:00:35 compute-0 podman[237065]: 2025-11-24 22:00:35.564436975 +0000 UTC m=+0.124152189 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 24 22:00:43 compute-0 podman[237106]: 2025-11-24 22:00:43.55682437 +0000 UTC m=+0.103304595 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 24 22:00:48 compute-0 podman[237129]: 2025-11-24 22:00:48.548282897 +0000 UTC m=+0.101978074 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Nov 24 22:00:52 compute-0 podman[237151]: 2025-11-24 22:00:52.582945595 +0000 UTC m=+0.129445353 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true)
Nov 24 22:00:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:00:54.552 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:00:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:00:54.553 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:00:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:00:54.554 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:00:58 compute-0 podman[237171]: 2025-11-24 22:00:58.560212547 +0000 UTC m=+0.112664226 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 22:00:59 compute-0 podman[237190]: 2025-11-24 22:00:59.572000726 +0000 UTC m=+0.125899805 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, version=9.6, distribution-scope=public, managed_by=edpm_ansible, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, release=1755695350, build-date=2025-08-20T13:12:41)
Nov 24 22:00:59 compute-0 podman[237189]: 2025-11-24 22:00:59.582262373 +0000 UTC m=+0.139304429 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, distribution-scope=public, release-0.7.12=, managed_by=edpm_ansible, io.buildah.version=1.29.0, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler)
Nov 24 22:00:59 compute-0 podman[203795]: time="2025-11-24T22:00:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:00:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:00:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Nov 24 22:00:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:00:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4273 "" "Go-http-client/1.1"
Nov 24 22:01:01 compute-0 openstack_network_exporter[205945]: ERROR   22:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:01:01 compute-0 openstack_network_exporter[205945]: ERROR   22:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:01:01 compute-0 openstack_network_exporter[205945]: ERROR   22:01:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:01:01 compute-0 openstack_network_exporter[205945]: ERROR   22:01:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:01:01 compute-0 openstack_network_exporter[205945]: ERROR   22:01:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:01:01 compute-0 CROND[237229]: (root) CMD (run-parts /etc/cron.hourly)
Nov 24 22:01:01 compute-0 run-parts[237232]: (/etc/cron.hourly) starting 0anacron
Nov 24 22:01:01 compute-0 anacron[237240]: Anacron started on 2025-11-24
Nov 24 22:01:01 compute-0 anacron[237240]: Will run job `cron.daily' in 20 min.
Nov 24 22:01:01 compute-0 anacron[237240]: Will run job `cron.weekly' in 40 min.
Nov 24 22:01:01 compute-0 anacron[237240]: Will run job `cron.monthly' in 60 min.
Nov 24 22:01:01 compute-0 anacron[237240]: Jobs will be executed sequentially
Nov 24 22:01:01 compute-0 run-parts[237242]: (/etc/cron.hourly) finished 0anacron
Nov 24 22:01:01 compute-0 CROND[237228]: (root) CMDEND (run-parts /etc/cron.hourly)
Nov 24 22:01:02 compute-0 podman[237243]: 2025-11-24 22:01:02.545504208 +0000 UTC m=+0.094077871 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 24 22:01:05 compute-0 nova_compute[189608]: 2025-11-24 22:01:05.790 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:01:05 compute-0 nova_compute[189608]: 2025-11-24 22:01:05.810 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:01:06 compute-0 podman[237268]: 2025-11-24 22:01:06.550938763 +0000 UTC m=+0.087919940 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 24 22:01:06 compute-0 podman[237267]: 2025-11-24 22:01:06.597685458 +0000 UTC m=+0.147566444 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 22:01:07 compute-0 nova_compute[189608]: 2025-11-24 22:01:07.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:01:07 compute-0 nova_compute[189608]: 2025-11-24 22:01:07.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:01:07 compute-0 nova_compute[189608]: 2025-11-24 22:01:07.835 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:01:07 compute-0 nova_compute[189608]: 2025-11-24 22:01:07.836 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:01:07 compute-0 nova_compute[189608]: 2025-11-24 22:01:07.836 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:01:07 compute-0 nova_compute[189608]: 2025-11-24 22:01:07.837 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 22:01:08 compute-0 nova_compute[189608]: 2025-11-24 22:01:08.339 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:01:08 compute-0 nova_compute[189608]: 2025-11-24 22:01:08.341 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5673MB free_disk=72.26186752319336GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 22:01:08 compute-0 nova_compute[189608]: 2025-11-24 22:01:08.341 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:01:08 compute-0 nova_compute[189608]: 2025-11-24 22:01:08.341 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:01:08 compute-0 nova_compute[189608]: 2025-11-24 22:01:08.410 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 22:01:08 compute-0 nova_compute[189608]: 2025-11-24 22:01:08.411 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 22:01:08 compute-0 nova_compute[189608]: 2025-11-24 22:01:08.449 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:01:08 compute-0 nova_compute[189608]: 2025-11-24 22:01:08.465 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:01:08 compute-0 nova_compute[189608]: 2025-11-24 22:01:08.468 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 22:01:08 compute-0 nova_compute[189608]: 2025-11-24 22:01:08.469 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.128s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
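Note: the inventory reported to Placement at 22:01:08.465 above is what determines schedulable capacity; Placement generally treats usable capacity per resource class as (total - reserved) * allocation_ratio, so with these numbers (a worked sketch, not taken verbatim from the log):

    VCPU:      (8    - 0)   * 4.0 = 32    schedulable vCPUs
    MEMORY_MB: (7679 - 512) * 1.0 = 7167  MB
    DISK_GB:   (79   - 0)   * 0.9 = 71.1  GB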
Nov 24 22:01:09 compute-0 nova_compute[189608]: 2025-11-24 22:01:09.465 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:01:09 compute-0 nova_compute[189608]: 2025-11-24 22:01:09.466 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:01:09 compute-0 nova_compute[189608]: 2025-11-24 22:01:09.467 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 22:01:09 compute-0 nova_compute[189608]: 2025-11-24 22:01:09.468 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 22:01:09 compute-0 nova_compute[189608]: 2025-11-24 22:01:09.482 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 22:01:09 compute-0 nova_compute[189608]: 2025-11-24 22:01:09.483 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:01:09 compute-0 nova_compute[189608]: 2025-11-24 22:01:09.484 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:01:09 compute-0 nova_compute[189608]: 2025-11-24 22:01:09.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:01:09 compute-0 nova_compute[189608]: 2025-11-24 22:01:09.796 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:01:09 compute-0 nova_compute[189608]: 2025-11-24 22:01:09.797 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
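Note: _reclaim_queued_deletes only purges SOFT_DELETED instances when reclaim_instance_interval in nova.conf is a positive number of seconds; with the default of 0 the periodic task exits immediately, as logged above. Enabling deferred delete would look roughly like this nova.conf snippet (a sketch, not the configuration of this host):

    [DEFAULT]
    reclaim_instance_interval = 3600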
Nov 24 22:01:13 compute-0 sshd-session[237307]: Accepted publickey for zuul from 38.102.83.200 port 39782 ssh2: RSA SHA256:s+FA6JzLhmwu0B38HrAw2rBq4K/LlOWNM3GY6zwZaMs
Nov 24 22:01:13 compute-0 systemd-logind[806]: New session 29 of user zuul.
Nov 24 22:01:13 compute-0 systemd[1]: Started Session 29 of User zuul.
Nov 24 22:01:13 compute-0 sshd-session[237307]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 22:01:14 compute-0 podman[237459]: 2025-11-24 22:01:14.235136601 +0000 UTC m=+0.113155444 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 24 22:01:14 compute-0 python3[237502]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 22:01:16 compute-0 sudo[237730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dinwhpeimdpsjjfhoqshgmdynzfjfopo ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764021676.0199466-36854-179690648360201/AnsiballZ_command.py'
Nov 24 22:01:16 compute-0 sudo[237730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 22:01:16 compute-0 python3[237732]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")
                                           journalctl -t "ceilometer_agent_compute" --no-pager -S "${tstamp}"
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
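Note: the Ansible command module above runs a two-line shell snippet on the host: date -d '30 minute ago' builds a timestamp, then journalctl -t ceilometer_agent_compute filters entries by syslog identifier and -S "${tstamp}" limits them to the last 30 minutes. The same query can be reproduced interactively (identical to the _raw_params shown above):

    tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")
    journalctl -t "ceilometer_agent_compute" --no-pager -S "${tstamp}"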
Nov 24 22:01:17 compute-0 sudo[237730]: pam_unix(sudo:session): session closed for user root
Nov 24 22:01:18 compute-0 sudo[237883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wefoccococwukwzvktqkqnpmyepgjahn ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764021677.512656-36865-249129956950661/AnsiballZ_command.py'
Nov 24 22:01:18 compute-0 sudo[237883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 22:01:18 compute-0 python3[237885]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")
                                           journalctl -t "nova_compute" --no-pager -S "${tstamp}"
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 22:01:19 compute-0 podman[237888]: 2025-11-24 22:01:19.579156277 +0000 UTC m=+0.141337301 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3)
Nov 24 22:01:19 compute-0 sudo[237883]: pam_unix(sudo:session): session closed for user root
Nov 24 22:01:21 compute-0 python3[238056]: ansible-ansible.builtin.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 24 22:01:22 compute-0 sudo[238207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udnasbapxopnxlaafulptptxhylpgvqm ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764021681.8574326-36909-69990757435864/AnsiballZ_setup.py'
Nov 24 22:01:22 compute-0 sudo[238207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 22:01:22 compute-0 python3[238209]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 22:01:23 compute-0 podman[238267]: 2025-11-24 22:01:23.564383386 +0000 UTC m=+0.111818753 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 24 22:01:23 compute-0 sudo[238207]: pam_unix(sudo:session): session closed for user root
Nov 24 22:01:25 compute-0 sudo[238452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhweiunyntyzftedluxxqgoyausgfvxw ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764021684.562103-36938-34038142351793/AnsiballZ_command.py'
Nov 24 22:01:25 compute-0 sudo[238452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 22:01:25 compute-0 python3[238454]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
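Note: the Go template in the command above prints one "name status" pair per container, and the grep keeps only ceilometer_agent_compute. An equivalent check without the shell pipe (a sketch using podman's own name filter) is:

    podman ps -a --filter name=ceilometer_agent_compute --format "{{.Names}} {{.Status}}"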
Nov 24 22:01:25 compute-0 sudo[238452]: pam_unix(sudo:session): session closed for user root
Nov 24 22:01:26 compute-0 sudo[238617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzutyhpokfwcijbjdaonuimdbranrdsh ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764021685.8822718-36955-107916481235932/AnsiballZ_command.py'
Nov 24 22:01:26 compute-0 sudo[238617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 22:01:26 compute-0 python3[238619]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 22:01:26 compute-0 sudo[238617]: pam_unix(sudo:session): session closed for user root
Nov 24 22:01:29 compute-0 podman[238658]: 2025-11-24 22:01:29.572196537 +0000 UTC m=+0.119485949 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:01:29 compute-0 podman[238678]: 2025-11-24 22:01:29.760160434 +0000 UTC m=+0.116895390 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, architecture=x86_64, config_id=edpm, name=ubi9, vcs-type=git, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, maintainer=Red Hat, Inc., version=9.4, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, com.redhat.component=ubi9-container)
Nov 24 22:01:29 compute-0 podman[203795]: time="2025-11-24T22:01:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:01:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:01:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Nov 24 22:01:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:01:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4281 "" "Go-http-client/1.1"
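Note: the two GET requests above are the podman_exporter container polling the libpod REST API over the podman socket that its config_data mounts at /run/podman/podman.sock (CONTAINER_HOST). The same endpoint can be queried by hand, e.g. (a sketch assuming the socket path shown in the log):

    curl --unix-socket /run/podman/podman.sock 'http://d/v4.9.3/libpod/containers/json?all=true'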
Nov 24 22:01:29 compute-0 podman[238679]: 2025-11-24 22:01:29.795256334 +0000 UTC m=+0.142206069 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, name=ubi9-minimal, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, distribution-scope=public)
Nov 24 22:01:31 compute-0 openstack_network_exporter[205945]: ERROR   22:01:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:01:31 compute-0 openstack_network_exporter[205945]: ERROR   22:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:01:31 compute-0 openstack_network_exporter[205945]: ERROR   22:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:01:31 compute-0 openstack_network_exporter[205945]: ERROR   22:01:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:01:31 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:01:31 compute-0 openstack_network_exporter[205945]: ERROR   22:01:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:01:31 compute-0 openstack_network_exporter[205945]: 
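Note: these recurring errors are likely benign on a compute-only node: openstack_network_exporter probes ovsdb-server and ovn-northd through OVS/OVN control sockets, but ovn-northd runs on the control plane, so no ovn-northd*.ctl file exists under the directory the exporter mounts as /run/ovn, and the dpif-netdev appctl calls fail because this host has no userspace (netdev) datapath configured. A quick way to check for the sockets on the host (a sketch, assuming the host paths from the exporter's volume list) is:

    ls /var/lib/openvswitch/ovn/ /var/run/openvswitch/   # look for ovn-northd*.ctl; absent on a compute node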
Nov 24 22:01:33 compute-0 podman[238714]: 2025-11-24 22:01:33.58226148 +0000 UTC m=+0.129197458 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 24 22:01:37 compute-0 podman[238738]: 2025-11-24 22:01:37.590268599 +0000 UTC m=+0.131284252 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 24 22:01:37 compute-0 podman[238737]: 2025-11-24 22:01:37.624760491 +0000 UTC m=+0.171610633 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:01:44 compute-0 podman[238782]: 2025-11-24 22:01:44.529599986 +0000 UTC m=+0.082730767 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 24 22:01:50 compute-0 podman[238807]: 2025-11-24 22:01:50.520810804 +0000 UTC m=+0.081911782 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 22:01:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:01:54.554 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:01:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:01:54.555 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:01:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:01:54.555 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:01:54 compute-0 podman[238825]: 2025-11-24 22:01:54.560927052 +0000 UTC m=+0.111375679 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm)
Nov 24 22:01:59 compute-0 podman[203795]: time="2025-11-24T22:01:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:01:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:01:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Nov 24 22:01:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:01:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4277 "" "Go-http-client/1.1"
Nov 24 22:02:00 compute-0 sshd-session[238844]: Invalid user ubuntu from 45.148.10.240 port 38872
Nov 24 22:02:00 compute-0 sshd-session[238844]: Connection closed by invalid user ubuntu 45.148.10.240 port 38872 [preauth]
Nov 24 22:02:00 compute-0 podman[238847]: 2025-11-24 22:02:00.160302399 +0000 UTC m=+0.103201758 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, version=9.6, config_id=edpm, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, name=ubi9-minimal, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter)
Nov 24 22:02:00 compute-0 podman[238848]: 2025-11-24 22:02:00.175646721 +0000 UTC m=+0.101162285 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 22:02:00 compute-0 podman[238846]: 2025-11-24 22:02:00.193986616 +0000 UTC m=+0.131685425 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release-0.7.12=, io.openshift.expose-services=, container_name=kepler, io.openshift.tags=base rhel9, managed_by=edpm_ansible, release=1214.1726694543, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, architecture=x86_64, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Nov 24 22:02:01 compute-0 openstack_network_exporter[205945]: ERROR   22:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:02:01 compute-0 openstack_network_exporter[205945]: ERROR   22:02:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:02:01 compute-0 openstack_network_exporter[205945]: ERROR   22:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:02:01 compute-0 openstack_network_exporter[205945]: ERROR   22:02:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:02:01 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:02:01 compute-0 openstack_network_exporter[205945]: ERROR   22:02:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:02:01 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:02:04 compute-0 podman[238900]: 2025-11-24 22:02:04.519132478 +0000 UTC m=+0.076668481 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 22:02:05 compute-0 nova_compute[189608]: 2025-11-24 22:02:05.798 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:02:08 compute-0 podman[238926]: 2025-11-24 22:02:08.52531026 +0000 UTC m=+0.085264566 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 22:02:08 compute-0 podman[238925]: 2025-11-24 22:02:08.606478079 +0000 UTC m=+0.161248175 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 24 22:02:08 compute-0 nova_compute[189608]: 2025-11-24 22:02:08.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:02:08 compute-0 nova_compute[189608]: 2025-11-24 22:02:08.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:02:08 compute-0 nova_compute[189608]: 2025-11-24 22:02:08.825 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:02:08 compute-0 nova_compute[189608]: 2025-11-24 22:02:08.825 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:02:08 compute-0 nova_compute[189608]: 2025-11-24 22:02:08.826 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:02:08 compute-0 nova_compute[189608]: 2025-11-24 22:02:08.826 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 22:02:09 compute-0 nova_compute[189608]: 2025-11-24 22:02:09.366 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:02:09 compute-0 nova_compute[189608]: 2025-11-24 22:02:09.368 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5704MB free_disk=72.25985717773438GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 22:02:09 compute-0 nova_compute[189608]: 2025-11-24 22:02:09.368 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:02:09 compute-0 nova_compute[189608]: 2025-11-24 22:02:09.369 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:02:09 compute-0 nova_compute[189608]: 2025-11-24 22:02:09.441 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 22:02:09 compute-0 nova_compute[189608]: 2025-11-24 22:02:09.442 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 22:02:09 compute-0 nova_compute[189608]: 2025-11-24 22:02:09.471 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:02:09 compute-0 nova_compute[189608]: 2025-11-24 22:02:09.486 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:02:09 compute-0 nova_compute[189608]: 2025-11-24 22:02:09.489 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 22:02:09 compute-0 nova_compute[189608]: 2025-11-24 22:02:09.490 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.121s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
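The inventory payload logged a few lines earlier (VCPU, MEMORY_MB and DISK_GB, each with total, reserved and allocation_ratio) is what determines how much capacity Placement will actually schedule against. A small illustrative calculation using the same figures, assuming the usual Placement capacity formula (total - reserved) * allocation_ratio:

```python
# Illustrative only: derive schedulable capacity from the inventory data
# reported above for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 79,   "reserved": 0,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    # Placement treats capacity as (total - reserved) * allocation_ratio.
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g} schedulable units")
```

With these values the node advertises 32 schedulable VCPUs, 7167 MB of RAM and about 71 GB of disk, even though only 8 physical vCPUs and 7679 MB are present.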
Nov 24 22:02:10 compute-0 nova_compute[189608]: 2025-11-24 22:02:10.491 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:02:10 compute-0 nova_compute[189608]: 2025-11-24 22:02:10.492 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 22:02:10 compute-0 nova_compute[189608]: 2025-11-24 22:02:10.492 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 22:02:10 compute-0 nova_compute[189608]: 2025-11-24 22:02:10.507 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 22:02:10 compute-0 nova_compute[189608]: 2025-11-24 22:02:10.508 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:02:10 compute-0 nova_compute[189608]: 2025-11-24 22:02:10.792 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:02:10 compute-0 nova_compute[189608]: 2025-11-24 22:02:10.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:02:10 compute-0 nova_compute[189608]: 2025-11-24 22:02:10.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:02:11 compute-0 nova_compute[189608]: 2025-11-24 22:02:11.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:02:11 compute-0 nova_compute[189608]: 2025-11-24 22:02:11.794 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
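The _heal_instance_info_cache, _poll_* and _reclaim_queued_deletes entries above are oslo.service periodic tasks; _reclaim_queued_deletes is a no-op here because reclaim_instance_interval is not set to a positive value. A hedged sketch, not Nova's code, of how such tasks are declared and dispatched to produce the "Running periodic task ..." lines:

```python
# Minimal sketch (assumption: illustrative, not nova.compute.manager) of an
# oslo.service periodic task manager.
from oslo_config import cfg
from oslo_service import periodic_task

CONF = cfg.CONF

class Manager(periodic_task.PeriodicTasks):
    def __init__(self):
        super().__init__(CONF)

    @periodic_task.periodic_task(spacing=60)
    def _example_task(self, context):
        # Invoked by the runner when due; the runner logs
        # "Running periodic task Manager._example_task" for each call.
        pass

manager = Manager()
manager.run_periodic_tasks(context=None)
```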
Nov 24 22:02:14 compute-0 podman[238965]: 2025-11-24 22:02:14.797043663 +0000 UTC m=+0.096022987 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
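The podman health_status event above comes from the container's configured healthcheck (see the 'healthcheck' entry in config_data). To read that status programmatically, a hypothetical helper along these lines should work, assuming the podman CLI is available; the State.Health / State.Healthcheck handling is an assumption meant to cover both newer and older podman versions:

```python
# Hypothetical helper (assumption: podman CLI on PATH, caller may need root
# for rootful containers) that reads the health status behind events like
# the health_status=healthy line above.
import json
import subprocess

def container_health(name: str) -> str:
    out = subprocess.run(
        ["podman", "inspect", name],
        capture_output=True, text=True, check=True,
    ).stdout
    data = json.loads(out)[0]
    # Newer podman exposes .State.Health, older releases .State.Healthcheck.
    state = data.get("State", {})
    health = state.get("Health") or state.get("Healthcheck") or {}
    return health.get("Status", "unknown")

print(container_health("podman_exporter"))
```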
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.618 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them, so the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.618 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.619 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b1b8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.619 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f4c57fbbfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.620 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b1b8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.620 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b1b8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.620 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b1b8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.620 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b1b8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.621 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b1b8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.621 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b1b8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.621 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b1b8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.621 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b1b8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.621 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b1b8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.622 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b1b8f0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.622 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b1b8f0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.621 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.622 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f4c580340b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.622 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b1b8f0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.623 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b1b8f0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.623 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.623 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f4c57fba4e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.623 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b1b8f0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.624 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b1b8f0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.623 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.625 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f4c57fbb080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.625 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.625 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f4c57fbb1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.625 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.625 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f4c57fb9730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.626 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.626 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f4c57fbb200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.626 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.626 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f4c57fbb260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.624 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b1b8f0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.627 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b1b8f0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.627 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.627 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f4c57fbb2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.627 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b1b8f0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.628 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b1b8f0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.628 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.629 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f4c57fbb320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.629 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.629 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f4c57fbb380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.629 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.629 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f4c58034380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.630 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.630 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f4c57fbb740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.630 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.630 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f4c57fbb3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.630 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.628 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b1b8f0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.631 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b1b8f0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.631 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b1b8f0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.632 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b1b8f0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.632 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b1b8f0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.631 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f4c57fbb9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.632 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.633 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f4c57fbb440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.633 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.633 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f4c57fbbc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.633 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.633 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f4c57fbbcb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.634 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.634 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f4c57fbbd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.634 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.634 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f4c57fbbda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.635 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.635 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f4c57fbbe30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.632 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b1b8f0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.635 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.636 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f4c57fbb680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.636 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.636 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f4c57fbbec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.636 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.636 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f4c57fbb6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.637 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.637 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f4c59b99130>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.637 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.637 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f4c57fbbf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.637 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.638 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.638 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.638 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.638 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.638 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.638 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.638 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.639 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.639 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.639 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.639 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.639 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.639 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.639 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.639 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.639 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.639 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.639 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.639 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.640 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.640 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.640 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.640 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.640 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.640 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:02:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:02:17.640 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
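Every pollster in this cycle ends with "Skip pollster ..., no resources found this cycle" because the local_instances discovery returned an empty list: no guests are running on compute-0 yet, which is consistent with the earlier resource-tracker audit (used_vcpus=0). An illustrative sketch of that discover-then-poll pattern, not Ceilometer's actual implementation:

```python
# Illustrative sketch (assumption: simplified stand-in for
# ceilometer.polling.manager) of why a pollster is skipped when discovery
# finds nothing to measure.
def run_pollster(name, discover, get_samples):
    resources = discover()  # e.g. libvirt domains found on this host
    if not resources:
        print(f"Skip pollster {name}, no resources found this cycle")
        return []
    return list(get_samples(resources))

# With no local instances, discovery yields [] and every meter is skipped.
samples = run_pollster("cpu", discover=lambda: [], get_samples=lambda r: [])
```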
Nov 24 22:02:21 compute-0 podman[238991]: 2025-11-24 22:02:21.52979299 +0000 UTC m=+0.086829464 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 24 22:02:25 compute-0 podman[239010]: 2025-11-24 22:02:25.521593331 +0000 UTC m=+0.082696647 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118)
Nov 24 22:02:26 compute-0 sshd-session[237310]: Received disconnect from 38.102.83.200 port 39782:11: disconnected by user
Nov 24 22:02:26 compute-0 sshd-session[237310]: Disconnected from user zuul 38.102.83.200 port 39782
Nov 24 22:02:26 compute-0 sshd-session[237307]: pam_unix(sshd:session): session closed for user zuul
Nov 24 22:02:26 compute-0 systemd[1]: session-29.scope: Deactivated successfully.
Nov 24 22:02:26 compute-0 systemd[1]: session-29.scope: Consumed 11.411s CPU time.
Nov 24 22:02:26 compute-0 systemd-logind[806]: Session 29 logged out. Waiting for processes to exit.
Nov 24 22:02:26 compute-0 systemd-logind[806]: Removed session 29.
Nov 24 22:02:29 compute-0 podman[203795]: time="2025-11-24T22:02:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:02:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:02:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Nov 24 22:02:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:02:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4281 "" "Go-http-client/1.1"
Nov 24 22:02:30 compute-0 podman[239029]: 2025-11-24 22:02:30.569208643 +0000 UTC m=+0.120792600 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1214.1726694543, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, config_id=edpm, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., container_name=kepler, io.buildah.version=1.29.0, distribution-scope=public, release-0.7.12=)
Nov 24 22:02:30 compute-0 podman[239030]: 2025-11-24 22:02:30.599289479 +0000 UTC m=+0.136059549 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, version=9.6, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, architecture=x86_64, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc.)
Nov 24 22:02:30 compute-0 podman[239031]: 2025-11-24 22:02:30.613324371 +0000 UTC m=+0.146326066 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, org.label-schema.build-date=20251118)
Nov 24 22:02:31 compute-0 openstack_network_exporter[205945]: ERROR   22:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:02:31 compute-0 openstack_network_exporter[205945]: ERROR   22:02:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:02:31 compute-0 openstack_network_exporter[205945]: ERROR   22:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:02:31 compute-0 openstack_network_exporter[205945]: ERROR   22:02:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:02:31 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:02:31 compute-0 openstack_network_exporter[205945]: ERROR   22:02:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:02:31 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:02:35 compute-0 podman[239085]: 2025-11-24 22:02:35.528317489 +0000 UTC m=+0.083843112 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 24 22:02:39 compute-0 podman[239109]: 2025-11-24 22:02:39.552831458 +0000 UTC m=+0.101176796 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 24 22:02:39 compute-0 podman[239108]: 2025-11-24 22:02:39.595077398 +0000 UTC m=+0.149881375 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 24 22:02:45 compute-0 podman[239147]: 2025-11-24 22:02:45.538439715 +0000 UTC m=+0.097288755 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 24 22:02:45 compute-0 sshd-session[239157]: Invalid user solana from 193.32.162.145 port 33220
Nov 24 22:02:46 compute-0 sshd-session[239157]: Connection closed by invalid user solana 193.32.162.145 port 33220 [preauth]
Nov 24 22:02:52 compute-0 podman[239174]: 2025-11-24 22:02:52.565292465 +0000 UTC m=+0.114882527 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:02:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:02:54.556 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:02:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:02:54.557 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:02:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:02:54.558 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:02:56 compute-0 podman[239194]: 2025-11-24 22:02:56.560663886 +0000 UTC m=+0.113548117 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm)
Nov 24 22:02:59 compute-0 podman[203795]: time="2025-11-24T22:02:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:02:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:02:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Nov 24 22:02:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:02:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4285 "" "Go-http-client/1.1"
Nov 24 22:03:01 compute-0 openstack_network_exporter[205945]: ERROR   22:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:03:01 compute-0 openstack_network_exporter[205945]: ERROR   22:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:03:01 compute-0 openstack_network_exporter[205945]: ERROR   22:03:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:03:01 compute-0 openstack_network_exporter[205945]: ERROR   22:03:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:03:01 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:03:01 compute-0 openstack_network_exporter[205945]: ERROR   22:03:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:03:01 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:03:01 compute-0 podman[239212]: 2025-11-24 22:03:01.5372284 +0000 UTC m=+0.096786119 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, release-0.7.12=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.openshift.expose-services=, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, io.openshift.tags=base rhel9, vcs-type=git)
Nov 24 22:03:01 compute-0 podman[239213]: 2025-11-24 22:03:01.544066561 +0000 UTC m=+0.089220957 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1755695350, config_id=edpm, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7)
Nov 24 22:03:01 compute-0 podman[239214]: 2025-11-24 22:03:01.556173553 +0000 UTC m=+0.111085589 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, container_name=ceilometer_agent_ipmi, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 24 22:03:05 compute-0 nova_compute[189608]: 2025-11-24 22:03:05.790 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:03:06 compute-0 podman[239268]: 2025-11-24 22:03:06.545248553 +0000 UTC m=+0.100447022 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 22:03:07 compute-0 nova_compute[189608]: 2025-11-24 22:03:07.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:03:09 compute-0 nova_compute[189608]: 2025-11-24 22:03:09.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:03:09 compute-0 nova_compute[189608]: 2025-11-24 22:03:09.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:03:09 compute-0 nova_compute[189608]: 2025-11-24 22:03:09.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:03:09 compute-0 nova_compute[189608]: 2025-11-24 22:03:09.851 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:03:09 compute-0 nova_compute[189608]: 2025-11-24 22:03:09.852 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:03:09 compute-0 nova_compute[189608]: 2025-11-24 22:03:09.852 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:03:09 compute-0 nova_compute[189608]: 2025-11-24 22:03:09.853 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 22:03:10 compute-0 nova_compute[189608]: 2025-11-24 22:03:10.302 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:03:10 compute-0 nova_compute[189608]: 2025-11-24 22:03:10.304 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5705MB free_disk=72.2598762512207GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 22:03:10 compute-0 nova_compute[189608]: 2025-11-24 22:03:10.305 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:03:10 compute-0 nova_compute[189608]: 2025-11-24 22:03:10.305 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:03:10 compute-0 nova_compute[189608]: 2025-11-24 22:03:10.375 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 22:03:10 compute-0 nova_compute[189608]: 2025-11-24 22:03:10.376 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 22:03:10 compute-0 nova_compute[189608]: 2025-11-24 22:03:10.403 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:03:10 compute-0 nova_compute[189608]: 2025-11-24 22:03:10.418 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:03:10 compute-0 nova_compute[189608]: 2025-11-24 22:03:10.420 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 22:03:10 compute-0 nova_compute[189608]: 2025-11-24 22:03:10.421 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.115s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:03:10 compute-0 podman[239291]: 2025-11-24 22:03:10.55771395 +0000 UTC m=+0.104396445 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 24 22:03:10 compute-0 podman[239290]: 2025-11-24 22:03:10.571630149 +0000 UTC m=+0.127970490 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:03:11 compute-0 nova_compute[189608]: 2025-11-24 22:03:11.421 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:03:11 compute-0 nova_compute[189608]: 2025-11-24 22:03:11.422 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 22:03:11 compute-0 nova_compute[189608]: 2025-11-24 22:03:11.422 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 22:03:11 compute-0 nova_compute[189608]: 2025-11-24 22:03:11.441 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 22:03:11 compute-0 nova_compute[189608]: 2025-11-24 22:03:11.792 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:03:12 compute-0 nova_compute[189608]: 2025-11-24 22:03:12.789 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:03:12 compute-0 nova_compute[189608]: 2025-11-24 22:03:12.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:03:12 compute-0 nova_compute[189608]: 2025-11-24 22:03:12.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:03:12 compute-0 nova_compute[189608]: 2025-11-24 22:03:12.794 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 22:03:16 compute-0 podman[239335]: 2025-11-24 22:03:16.590050245 +0000 UTC m=+0.138862376 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 24 22:03:23 compute-0 podman[239359]: 2025-11-24 22:03:23.599606063 +0000 UTC m=+0.145675677 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 22:03:27 compute-0 podman[239379]: 2025-11-24 22:03:27.565977788 +0000 UTC m=+0.114210871 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:03:29 compute-0 podman[203795]: time="2025-11-24T22:03:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:03:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:03:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Nov 24 22:03:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:03:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4283 "" "Go-http-client/1.1"
Nov 24 22:03:31 compute-0 openstack_network_exporter[205945]: ERROR   22:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:03:31 compute-0 openstack_network_exporter[205945]: ERROR   22:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:03:31 compute-0 openstack_network_exporter[205945]: ERROR   22:03:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:03:31 compute-0 openstack_network_exporter[205945]: ERROR   22:03:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:03:31 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:03:31 compute-0 openstack_network_exporter[205945]: ERROR   22:03:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:03:31 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:03:32 compute-0 podman[239400]: 2025-11-24 22:03:32.55919619 +0000 UTC m=+0.098344167 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, name=ubi9-minimal, build-date=2025-08-20T13:12:41, config_id=edpm, vcs-type=git)
Nov 24 22:03:32 compute-0 podman[239399]: 2025-11-24 22:03:32.563897595 +0000 UTC m=+0.121350021 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, release=1214.1726694543, vcs-type=git, vendor=Red Hat, Inc., managed_by=edpm_ansible, container_name=kepler, config_id=edpm, release-0.7.12=, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.29.0)
Nov 24 22:03:32 compute-0 podman[239404]: 2025-11-24 22:03:32.589110999 +0000 UTC m=+0.122149606 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 24 22:03:35 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:03:35.093 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7e:33:1a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '7e:8a:65:da:aa:a2'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:03:35 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:03:35.095 106776 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 22:03:35 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:03:35.097 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d2f80616-70e9-484c-836d-1edab81fe5d9, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:03:37 compute-0 podman[239456]: 2025-11-24 22:03:37.546452308 +0000 UTC m=+0.094401225 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 24 22:03:41 compute-0 podman[239481]: 2025-11-24 22:03:41.585248702 +0000 UTC m=+0.138401682 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 22:03:41 compute-0 podman[239480]: 2025-11-24 22:03:41.64759946 +0000 UTC m=+0.196077964 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 24 22:03:44 compute-0 sshd-session[239524]: Invalid user ubuntu from 45.148.10.240 port 35594
Nov 24 22:03:44 compute-0 sshd-session[239524]: Connection closed by invalid user ubuntu 45.148.10.240 port 35594 [preauth]
Nov 24 22:03:47 compute-0 podman[239526]: 2025-11-24 22:03:47.573545498 +0000 UTC m=+0.113270141 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 24 22:03:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:03:54.557 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:03:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:03:54.559 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:03:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:03:54.559 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:03:54 compute-0 podman[239552]: 2025-11-24 22:03:54.579930939 +0000 UTC m=+0.112676911 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 22:03:58 compute-0 podman[239572]: 2025-11-24 22:03:58.545770417 +0000 UTC m=+0.104990594 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_id=edpm)
Nov 24 22:03:59 compute-0 podman[203795]: time="2025-11-24T22:03:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:03:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:03:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Nov 24 22:03:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:03:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4290 "" "Go-http-client/1.1"
Nov 24 22:04:01 compute-0 openstack_network_exporter[205945]: ERROR   22:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:04:01 compute-0 openstack_network_exporter[205945]: ERROR   22:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:04:01 compute-0 openstack_network_exporter[205945]: ERROR   22:04:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:04:01 compute-0 openstack_network_exporter[205945]: ERROR   22:04:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:04:01 compute-0 openstack_network_exporter[205945]: ERROR   22:04:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:04:03 compute-0 podman[239592]: 2025-11-24 22:04:03.530310339 +0000 UTC m=+0.084235639 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, container_name=kepler, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, release-0.7.12=, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc.)
Nov 24 22:04:03 compute-0 podman[239594]: 2025-11-24 22:04:03.5448107 +0000 UTC m=+0.086004654 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi)
Nov 24 22:04:03 compute-0 podman[239593]: 2025-11-24 22:04:03.54966624 +0000 UTC m=+0.088900283 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, config_id=edpm, name=ubi9-minimal, release=1755695350, version=9.6, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter)
Nov 24 22:04:08 compute-0 podman[239652]: 2025-11-24 22:04:08.537799926 +0000 UTC m=+0.094200779 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 24 22:04:09 compute-0 nova_compute[189608]: 2025-11-24 22:04:09.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:04:09 compute-0 nova_compute[189608]: 2025-11-24 22:04:09.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:04:09 compute-0 nova_compute[189608]: 2025-11-24 22:04:09.820 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:04:09 compute-0 nova_compute[189608]: 2025-11-24 22:04:09.821 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:04:09 compute-0 nova_compute[189608]: 2025-11-24 22:04:09.821 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:04:09 compute-0 nova_compute[189608]: 2025-11-24 22:04:09.821 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 22:04:10 compute-0 nova_compute[189608]: 2025-11-24 22:04:10.312 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:04:10 compute-0 nova_compute[189608]: 2025-11-24 22:04:10.314 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5680MB free_disk=72.26167297363281GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 22:04:10 compute-0 nova_compute[189608]: 2025-11-24 22:04:10.314 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:04:10 compute-0 nova_compute[189608]: 2025-11-24 22:04:10.314 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:04:10 compute-0 nova_compute[189608]: 2025-11-24 22:04:10.377 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 22:04:10 compute-0 nova_compute[189608]: 2025-11-24 22:04:10.378 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 22:04:10 compute-0 nova_compute[189608]: 2025-11-24 22:04:10.410 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:04:10 compute-0 nova_compute[189608]: 2025-11-24 22:04:10.428 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:04:10 compute-0 nova_compute[189608]: 2025-11-24 22:04:10.430 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 22:04:10 compute-0 nova_compute[189608]: 2025-11-24 22:04:10.430 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.116s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:04:11 compute-0 nova_compute[189608]: 2025-11-24 22:04:11.430 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:04:11 compute-0 nova_compute[189608]: 2025-11-24 22:04:11.431 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 22:04:11 compute-0 nova_compute[189608]: 2025-11-24 22:04:11.431 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 22:04:11 compute-0 nova_compute[189608]: 2025-11-24 22:04:11.449 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 22:04:11 compute-0 nova_compute[189608]: 2025-11-24 22:04:11.449 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:04:11 compute-0 nova_compute[189608]: 2025-11-24 22:04:11.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:04:12 compute-0 podman[239678]: 2025-11-24 22:04:12.563528135 +0000 UTC m=+0.103483117 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 24 22:04:12 compute-0 podman[239677]: 2025-11-24 22:04:12.622078234 +0000 UTC m=+0.167935220 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 24 22:04:12 compute-0 nova_compute[189608]: 2025-11-24 22:04:12.789 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:04:13 compute-0 nova_compute[189608]: 2025-11-24 22:04:13.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:04:13 compute-0 nova_compute[189608]: 2025-11-24 22:04:13.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:04:14 compute-0 nova_compute[189608]: 2025-11-24 22:04:14.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:04:14 compute-0 nova_compute[189608]: 2025-11-24 22:04:14.794 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.619 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.619 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.619 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b03860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.620 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f4c57fbbfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.621 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b03860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.621 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b03860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.621 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b03860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.622 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b03860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.622 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b03860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.622 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b03860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.622 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b03860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.622 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b03860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.623 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b03860>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.623 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b03860>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.624 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b03860>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.624 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b03860>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.624 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b03860>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.623 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.625 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f4c580340b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.625 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.625 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f4c57fba4e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.625 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.626 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f4c57fbb080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.626 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.624 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b03860>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.626 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b03860>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.627 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b03860>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.627 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b03860>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.627 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b03860>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.628 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b03860>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.628 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b03860>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.628 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b03860>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.628 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b03860>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.626 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f4c57fbb1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.629 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.629 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f4c57fb9730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.629 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.630 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f4c57fbb200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.630 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.630 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f4c57fbb260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.630 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.631 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f4c57fbb2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.631 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.631 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f4c57fbb320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.631 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.631 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f4c57fbb380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.632 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.632 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f4c58034380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.632 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.632 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f4c57fbb740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.633 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.633 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f4c57fbb3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.633 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.633 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f4c57fbb9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.633 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.634 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f4c57fbb440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.634 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.634 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f4c57fbbc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.634 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.635 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f4c57fbbcb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.635 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.635 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f4c57fbbd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.635 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.636 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f4c57fbbda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.636 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.636 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f4c57fbbe30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.636 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.637 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f4c57fbb680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.637 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.637 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f4c57fbbec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.637 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.628 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b03860>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.638 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b03860>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.638 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b03860>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.638 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f4c57fbb6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.639 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.639 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f4c59b99130>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.639 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.639 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f4c57fbbf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.640 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.640 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.640 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.640 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.641 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.641 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.641 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.641 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.641 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.641 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.642 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.642 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.642 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.642 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.642 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.642 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.643 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.643 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.643 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.643 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.643 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.643 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.644 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.644 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.644 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.644 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:04:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:04:17.644 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
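The ceilometer_agent_compute DEBUG lines above trace one polling cycle: each pollster is registered against a shared ThreadPoolExecutor, the local_instances discovery runs for the cycle, and because no guest instances exist on this host yet every pollster is skipped and then marked finished. Below is a minimal sketch of that discover-then-skip pattern, assuming hypothetical names (discover_local_instances, run_pollster); it is not ceilometer's actual manager code.

    from concurrent.futures import ThreadPoolExecutor

    def discover_local_instances():
        # No guest instances are running on this compute host yet.
        return []

    def run_pollster(name, resources):
        if not resources:
            print(f"Skip pollster {name}, no resources found this cycle")
            return []
        return [(name, r) for r in resources]  # would build one sample per instance

    pollsters = ["cpu", "memory.usage", "disk.device.read.bytes",
                 "network.incoming.bytes"]
    # The discovery result is cached once and shared by every pollster in the cycle.
    discovery_cache = {"local_instances": discover_local_instances()}
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = {name: pool.submit(run_pollster, name,
                                     discovery_cache["local_instances"])
                   for name in pollsters}
        for name, fut in futures.items():
            fut.result()
            print(f"Finished processing pollster [{name}]")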
Nov 24 22:04:18 compute-0 podman[239720]: 2025-11-24 22:04:18.585807916 +0000 UTC m=+0.135786461 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 24 22:04:24 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:24.804 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7e:33:1a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '7e:8a:65:da:aa:a2'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:04:24 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:24.806 106776 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 22:04:24 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:24.807 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d2f80616-70e9-484c-836d-1edab81fe5d9, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:04:25 compute-0 podman[239744]: 2025-11-24 22:04:25.587064738 +0000 UTC m=+0.137581476 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 24 22:04:29 compute-0 podman[239762]: 2025-11-24 22:04:29.517285119 +0000 UTC m=+0.077439368 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Nov 24 22:04:29 compute-0 podman[203795]: time="2025-11-24T22:04:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:04:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:04:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Nov 24 22:04:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:04:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4282 "" "Go-http-client/1.1"
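The podman[203795] requests above come from the prometheus-podman-exporter container configured earlier (CONTAINER_HOST=unix:///run/podman/podman.sock): it lists containers and pulls stats through the libpod REST API on the Podman socket. A small standard-library sketch of the same GET follows; the UnixHTTPConnection helper is my own illustration, not part of podman or the exporter, and it assumes read access to the socket.

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # HTTP over an AF_UNIX socket; the host name is only used for the Host header.
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, len(resp.read()))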
Nov 24 22:04:31 compute-0 openstack_network_exporter[205945]: ERROR   22:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:04:31 compute-0 openstack_network_exporter[205945]: ERROR   22:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:04:31 compute-0 openstack_network_exporter[205945]: ERROR   22:04:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:04:31 compute-0 openstack_network_exporter[205945]: ERROR   22:04:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:04:31 compute-0 openstack_network_exporter[205945]: ERROR   22:04:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:04:34 compute-0 podman[239782]: 2025-11-24 22:04:34.545724087 +0000 UTC m=+0.093867429 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, container_name=openstack_network_exporter, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=9.6, io.openshift.expose-services=, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, build-date=2025-08-20T13:12:41, vcs-type=git, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 24 22:04:34 compute-0 podman[239783]: 2025-11-24 22:04:34.56355335 +0000 UTC m=+0.099078439 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 24 22:04:34 compute-0 podman[239781]: 2025-11-24 22:04:34.564076067 +0000 UTC m=+0.115848771 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release-0.7.12=, vcs-type=git, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Nov 24 22:04:35 compute-0 nova_compute[189608]: 2025-11-24 22:04:35.869 189613 DEBUG oslo_concurrency.lockutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "ea741b45-c6b4-41c0-a70f-c752b616faa2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:04:35 compute-0 nova_compute[189608]: 2025-11-24 22:04:35.870 189613 DEBUG oslo_concurrency.lockutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "ea741b45-c6b4-41c0-a70f-c752b616faa2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:04:35 compute-0 nova_compute[189608]: 2025-11-24 22:04:35.913 189613 DEBUG nova.compute.manager [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 24 22:04:36 compute-0 nova_compute[189608]: 2025-11-24 22:04:36.044 189613 DEBUG oslo_concurrency.lockutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:04:36 compute-0 nova_compute[189608]: 2025-11-24 22:04:36.045 189613 DEBUG oslo_concurrency.lockutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:04:36 compute-0 nova_compute[189608]: 2025-11-24 22:04:36.059 189613 DEBUG nova.virt.hardware [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 24 22:04:36 compute-0 nova_compute[189608]: 2025-11-24 22:04:36.060 189613 INFO nova.compute.claims [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Claim successful on node compute-0.ctlplane.example.com
Nov 24 22:04:36 compute-0 nova_compute[189608]: 2025-11-24 22:04:36.197 189613 DEBUG nova.compute.provider_tree [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:04:36 compute-0 nova_compute[189608]: 2025-11-24 22:04:36.210 189613 DEBUG nova.scheduler.client.report [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
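The inventory reported in the line above is what determines how much more can be scheduled on this node: placement treats usable capacity as roughly (total - reserved) * allocation_ratio per resource class. A quick worked check with the numbers from that DEBUG line (the helper is illustrative, not nova/placement code):

    # Values copied from the inventory DEBUG line above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 0,   "allocation_ratio": 0.9},
    }

    def usable(inv):
        # Usable capacity: (total - reserved) * allocation_ratio.
        return (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]

    for rc, inv in inventory.items():
        print(rc, usable(inv))
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 71.1 -- plenty of headroom, so the
    # claim for instance ea741b45-... succeeds as logged above.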
Nov 24 22:04:36 compute-0 nova_compute[189608]: 2025-11-24 22:04:36.231 189613 DEBUG oslo_concurrency.lockutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.185s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:04:36 compute-0 nova_compute[189608]: 2025-11-24 22:04:36.231 189613 DEBUG nova.compute.manager [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 24 22:04:36 compute-0 nova_compute[189608]: 2025-11-24 22:04:36.277 189613 DEBUG nova.compute.manager [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 24 22:04:36 compute-0 nova_compute[189608]: 2025-11-24 22:04:36.277 189613 DEBUG nova.network.neutron [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 24 22:04:36 compute-0 nova_compute[189608]: 2025-11-24 22:04:36.302 189613 INFO nova.virt.libvirt.driver [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 24 22:04:36 compute-0 nova_compute[189608]: 2025-11-24 22:04:36.368 189613 DEBUG nova.compute.manager [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 24 22:04:36 compute-0 nova_compute[189608]: 2025-11-24 22:04:36.474 189613 DEBUG nova.compute.manager [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 24 22:04:36 compute-0 nova_compute[189608]: 2025-11-24 22:04:36.476 189613 DEBUG nova.virt.libvirt.driver [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 24 22:04:36 compute-0 nova_compute[189608]: 2025-11-24 22:04:36.476 189613 INFO nova.virt.libvirt.driver [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Creating image(s)
Nov 24 22:04:36 compute-0 nova_compute[189608]: 2025-11-24 22:04:36.477 189613 DEBUG oslo_concurrency.lockutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "/var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:04:36 compute-0 nova_compute[189608]: 2025-11-24 22:04:36.478 189613 DEBUG oslo_concurrency.lockutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "/var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:04:36 compute-0 nova_compute[189608]: 2025-11-24 22:04:36.479 189613 DEBUG oslo_concurrency.lockutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "/var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:04:36 compute-0 nova_compute[189608]: 2025-11-24 22:04:36.479 189613 DEBUG oslo_concurrency.lockutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "bc2e84058646d2c6ba728b20ebecd0301036e9ed" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:04:36 compute-0 nova_compute[189608]: 2025-11-24 22:04:36.481 189613 DEBUG oslo_concurrency.lockutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "bc2e84058646d2c6ba728b20ebecd0301036e9ed" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:04:37 compute-0 nova_compute[189608]: 2025-11-24 22:04:37.913 189613 WARNING oslo_policy.policy [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Nov 24 22:04:38 compute-0 nova_compute[189608]: 2025-11-24 22:04:38.535 189613 DEBUG oslo_concurrency.processutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bc2e84058646d2c6ba728b20ebecd0301036e9ed.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:04:38 compute-0 nova_compute[189608]: 2025-11-24 22:04:38.624 189613 DEBUG oslo_concurrency.processutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bc2e84058646d2c6ba728b20ebecd0301036e9ed.part --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:04:38 compute-0 nova_compute[189608]: 2025-11-24 22:04:38.626 189613 DEBUG nova.virt.images [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] a63b9561-12dc-4c11-858f-aa6fafbed036 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Nov 24 22:04:38 compute-0 nova_compute[189608]: 2025-11-24 22:04:38.628 189613 DEBUG nova.privsep.utils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 24 22:04:38 compute-0 nova_compute[189608]: 2025-11-24 22:04:38.630 189613 DEBUG oslo_concurrency.processutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/bc2e84058646d2c6ba728b20ebecd0301036e9ed.part /var/lib/nova/instances/_base/bc2e84058646d2c6ba728b20ebecd0301036e9ed.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:04:38 compute-0 nova_compute[189608]: 2025-11-24 22:04:38.875 189613 DEBUG oslo_concurrency.processutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/bc2e84058646d2c6ba728b20ebecd0301036e9ed.part /var/lib/nova/instances/_base/bc2e84058646d2c6ba728b20ebecd0301036e9ed.converted" returned: 0 in 0.245s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:04:38 compute-0 nova_compute[189608]: 2025-11-24 22:04:38.883 189613 DEBUG oslo_concurrency.processutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bc2e84058646d2c6ba728b20ebecd0301036e9ed.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:04:38 compute-0 nova_compute[189608]: 2025-11-24 22:04:38.976 189613 DEBUG oslo_concurrency.processutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bc2e84058646d2c6ba728b20ebecd0301036e9ed.converted --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:04:38 compute-0 nova_compute[189608]: 2025-11-24 22:04:38.978 189613 DEBUG oslo_concurrency.lockutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "bc2e84058646d2c6ba728b20ebecd0301036e9ed" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.497s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
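The block above shows Nova populating its local image cache: the Glance image a63b9561… is probed with `qemu-img info`, found to be qcow2, converted to raw under `/var/lib/nova/instances/_base/`, and the whole fetch is serialized by the `bc2e8405…` lock. A minimal sketch of the same probe-then-convert step, using plain `subprocess` instead of Nova's `oslo_concurrency.processutils` wrapper (the function name and paths are illustrative, not Nova code):

```python
import json
import subprocess

def fetch_to_raw(part_path: str, converted_path: str) -> None:
    """Probe a downloaded image and convert qcow2 -> raw, mirroring the
    qemu-img calls logged above (illustrative sketch, not Nova's code)."""
    # --force-share lets us inspect the file without an exclusive lock.
    info = json.loads(subprocess.check_output(
        ["qemu-img", "info", "--force-share", "--output=json", part_path]))
    if info["format"] == "qcow2":
        # -t none bypasses the host page cache, as in the logged command.
        subprocess.check_call(["qemu-img", "convert", "-t", "none",
                               "-O", "raw", "-f", "qcow2",
                               part_path, converted_path])
```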
Nov 24 22:04:38 compute-0 nova_compute[189608]: 2025-11-24 22:04:38.995 189613 INFO oslo.privsep.daemon [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpa0u9j_uf/privsep.sock']
Nov 24 22:04:39 compute-0 nova_compute[189608]: 2025-11-24 22:04:39.008 189613 DEBUG nova.network.neutron [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Successfully created port: 5430cfcb-550b-4518-9caa-0720f99730b9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 24 22:04:39 compute-0 podman[239854]: 2025-11-24 22:04:39.576976132 +0000 UTC m=+0.129192426 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 24 22:04:39 compute-0 nova_compute[189608]: 2025-11-24 22:04:39.755 189613 INFO oslo.privsep.daemon [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Spawned new privsep daemon via rootwrap
Nov 24 22:04:39 compute-0 nova_compute[189608]: 2025-11-24 22:04:39.633 239876 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 24 22:04:39 compute-0 nova_compute[189608]: 2025-11-24 22:04:39.640 239876 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 24 22:04:39 compute-0 nova_compute[189608]: 2025-11-24 22:04:39.644 239876 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 24 22:04:39 compute-0 nova_compute[189608]: 2025-11-24 22:04:39.645 239876 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239876
Nov 24 22:04:39 compute-0 nova_compute[189608]: 2025-11-24 22:04:39.834 189613 DEBUG oslo_concurrency.processutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bc2e84058646d2c6ba728b20ebecd0301036e9ed --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:04:39 compute-0 nova_compute[189608]: 2025-11-24 22:04:39.901 189613 DEBUG oslo_concurrency.processutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bc2e84058646d2c6ba728b20ebecd0301036e9ed --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:04:39 compute-0 nova_compute[189608]: 2025-11-24 22:04:39.903 189613 DEBUG oslo_concurrency.lockutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "bc2e84058646d2c6ba728b20ebecd0301036e9ed" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:04:39 compute-0 nova_compute[189608]: 2025-11-24 22:04:39.905 189613 DEBUG oslo_concurrency.lockutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "bc2e84058646d2c6ba728b20ebecd0301036e9ed" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:04:39 compute-0 nova_compute[189608]: 2025-11-24 22:04:39.928 189613 DEBUG oslo_concurrency.processutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bc2e84058646d2c6ba728b20ebecd0301036e9ed --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:04:40 compute-0 nova_compute[189608]: 2025-11-24 22:04:40.007 189613 DEBUG oslo_concurrency.processutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bc2e84058646d2c6ba728b20ebecd0301036e9ed --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:04:40 compute-0 nova_compute[189608]: 2025-11-24 22:04:40.009 189613 DEBUG oslo_concurrency.processutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/bc2e84058646d2c6ba728b20ebecd0301036e9ed,backing_fmt=raw /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:04:40 compute-0 nova_compute[189608]: 2025-11-24 22:04:40.056 189613 DEBUG oslo_concurrency.processutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/bc2e84058646d2c6ba728b20ebecd0301036e9ed,backing_fmt=raw /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk 1073741824" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:04:40 compute-0 nova_compute[189608]: 2025-11-24 22:04:40.058 189613 DEBUG oslo_concurrency.lockutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "bc2e84058646d2c6ba728b20ebecd0301036e9ed" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.153s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
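With the raw base image in place, the instance's root disk at `/var/lib/nova/instances/ea741b45-…/disk` is created as a 1 GiB qcow2 overlay whose backing file is the cached base, so only guest writes consume local space. A sketch of the equivalent overlay creation, again via `subprocess` (the helper name is hypothetical; the qemu-img options are those in the logged command):

```python
import subprocess

def create_cow_overlay(base: str, disk: str, size_bytes: int) -> None:
    """Create a qcow2 copy-on-write overlay on top of a raw base image,
    as in the 'qemu-img create' command logged above (sketch only)."""
    subprocess.check_call([
        "qemu-img", "create", "-f", "qcow2",
        "-o", f"backing_file={base},backing_fmt=raw",
        disk, str(size_bytes),
    ])
```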
Nov 24 22:04:40 compute-0 nova_compute[189608]: 2025-11-24 22:04:40.059 189613 DEBUG oslo_concurrency.processutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bc2e84058646d2c6ba728b20ebecd0301036e9ed --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:04:40 compute-0 nova_compute[189608]: 2025-11-24 22:04:40.131 189613 DEBUG oslo_concurrency.processutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bc2e84058646d2c6ba728b20ebecd0301036e9ed --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:04:40 compute-0 nova_compute[189608]: 2025-11-24 22:04:40.132 189613 DEBUG nova.virt.disk.api [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Checking if we can resize image /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 24 22:04:40 compute-0 nova_compute[189608]: 2025-11-24 22:04:40.133 189613 DEBUG oslo_concurrency.processutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:04:40 compute-0 nova_compute[189608]: 2025-11-24 22:04:40.188 189613 DEBUG oslo_concurrency.processutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:04:40 compute-0 nova_compute[189608]: 2025-11-24 22:04:40.190 189613 DEBUG nova.virt.disk.api [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Cannot resize image /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Nov 24 22:04:40 compute-0 nova_compute[189608]: 2025-11-24 22:04:40.191 189613 DEBUG nova.objects.instance [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lazy-loading 'migration_context' on Instance uuid ea741b45-c6b4-41c0-a70f-c752b616faa2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:04:40 compute-0 nova_compute[189608]: 2025-11-24 22:04:40.209 189613 DEBUG oslo_concurrency.lockutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "/var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:04:40 compute-0 nova_compute[189608]: 2025-11-24 22:04:40.210 189613 DEBUG oslo_concurrency.lockutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "/var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:04:40 compute-0 nova_compute[189608]: 2025-11-24 22:04:40.212 189613 DEBUG oslo_concurrency.lockutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "/var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:04:40 compute-0 nova_compute[189608]: 2025-11-24 22:04:40.213 189613 DEBUG oslo_concurrency.lockutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:04:40 compute-0 nova_compute[189608]: 2025-11-24 22:04:40.214 189613 DEBUG oslo_concurrency.lockutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:04:40 compute-0 nova_compute[189608]: 2025-11-24 22:04:40.215 189613 DEBUG oslo_concurrency.processutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:04:40 compute-0 nova_compute[189608]: 2025-11-24 22:04:40.254 189613 DEBUG oslo_concurrency.processutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.038s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:04:40 compute-0 nova_compute[189608]: 2025-11-24 22:04:40.255 189613 DEBUG oslo_concurrency.processutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:04:40 compute-0 nova_compute[189608]: 2025-11-24 22:04:40.320 189613 DEBUG oslo_concurrency.processutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:04:40 compute-0 nova_compute[189608]: 2025-11-24 22:04:40.322 189613 DEBUG oslo_concurrency.lockutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.107s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
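The flavor also requests a 1 GiB ephemeral disk, so Nova creates (and caches) a raw, VFAT-formatted base image `ephemeral_1_0706d66`; the per-instance ephemeral disk is then a thin qcow2 overlay on top of it, exactly like the root disk. The two logged commands boil down to the following sketch (not Nova code):

```python
import subprocess

def make_ephemeral_base(path: str) -> None:
    """Create and format the cached ephemeral base image, mirroring the
    'qemu-img create -f raw ... 1G' and 'mkfs -t vfat -n ephemeral0 ...'
    commands logged above (illustrative sketch)."""
    subprocess.check_call(["qemu-img", "create", "-f", "raw", path, "1G"])
    subprocess.check_call(["mkfs", "-t", "vfat", "-n", "ephemeral0", path])
```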
Nov 24 22:04:40 compute-0 nova_compute[189608]: 2025-11-24 22:04:40.351 189613 DEBUG oslo_concurrency.processutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:04:40 compute-0 nova_compute[189608]: 2025-11-24 22:04:40.448 189613 DEBUG oslo_concurrency.processutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:04:40 compute-0 nova_compute[189608]: 2025-11-24 22:04:40.450 189613 DEBUG oslo_concurrency.lockutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:04:40 compute-0 nova_compute[189608]: 2025-11-24 22:04:40.452 189613 DEBUG oslo_concurrency.lockutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:04:40 compute-0 nova_compute[189608]: 2025-11-24 22:04:40.479 189613 DEBUG oslo_concurrency.processutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:04:40 compute-0 nova_compute[189608]: 2025-11-24 22:04:40.552 189613 DEBUG oslo_concurrency.processutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:04:40 compute-0 nova_compute[189608]: 2025-11-24 22:04:40.554 189613 DEBUG oslo_concurrency.processutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:04:40 compute-0 nova_compute[189608]: 2025-11-24 22:04:40.610 189613 DEBUG oslo_concurrency.processutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 1073741824" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:04:40 compute-0 nova_compute[189608]: 2025-11-24 22:04:40.611 189613 DEBUG oslo_concurrency.lockutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.160s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:04:40 compute-0 nova_compute[189608]: 2025-11-24 22:04:40.612 189613 DEBUG oslo_concurrency.processutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:04:40 compute-0 nova_compute[189608]: 2025-11-24 22:04:40.665 189613 DEBUG oslo_concurrency.processutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:04:40 compute-0 nova_compute[189608]: 2025-11-24 22:04:40.666 189613 DEBUG nova.virt.libvirt.driver [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 24 22:04:40 compute-0 nova_compute[189608]: 2025-11-24 22:04:40.666 189613 DEBUG nova.virt.libvirt.driver [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Ensure instance console log exists: /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 24 22:04:40 compute-0 nova_compute[189608]: 2025-11-24 22:04:40.666 189613 DEBUG oslo_concurrency.lockutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:04:40 compute-0 nova_compute[189608]: 2025-11-24 22:04:40.666 189613 DEBUG oslo_concurrency.lockutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:04:40 compute-0 nova_compute[189608]: 2025-11-24 22:04:40.667 189613 DEBUG oslo_concurrency.lockutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:04:41 compute-0 nova_compute[189608]: 2025-11-24 22:04:41.001 189613 DEBUG nova.network.neutron [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Successfully updated port: 5430cfcb-550b-4518-9caa-0720f99730b9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 24 22:04:41 compute-0 nova_compute[189608]: 2025-11-24 22:04:41.025 189613 DEBUG oslo_concurrency.lockutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "refresh_cache-ea741b45-c6b4-41c0-a70f-c752b616faa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:04:41 compute-0 nova_compute[189608]: 2025-11-24 22:04:41.026 189613 DEBUG oslo_concurrency.lockutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquired lock "refresh_cache-ea741b45-c6b4-41c0-a70f-c752b616faa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:04:41 compute-0 nova_compute[189608]: 2025-11-24 22:04:41.026 189613 DEBUG nova.network.neutron [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 22:04:41 compute-0 nova_compute[189608]: 2025-11-24 22:04:41.244 189613 DEBUG nova.network.neutron [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 22:04:41 compute-0 nova_compute[189608]: 2025-11-24 22:04:41.545 189613 DEBUG nova.compute.manager [req-c89dd907-21f6-424d-b72d-e1c3bb654235 req-0c23ac2f-04a6-4326-9222-ff6a46056b80 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Received event network-changed-5430cfcb-550b-4518-9caa-0720f99730b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:04:41 compute-0 nova_compute[189608]: 2025-11-24 22:04:41.546 189613 DEBUG nova.compute.manager [req-c89dd907-21f6-424d-b72d-e1c3bb654235 req-0c23ac2f-04a6-4326-9222-ff6a46056b80 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Refreshing instance network info cache due to event network-changed-5430cfcb-550b-4518-9caa-0720f99730b9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 22:04:41 compute-0 nova_compute[189608]: 2025-11-24 22:04:41.547 189613 DEBUG oslo_concurrency.lockutils [req-c89dd907-21f6-424d-b72d-e1c3bb654235 req-0c23ac2f-04a6-4326-9222-ff6a46056b80 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "refresh_cache-ea741b45-c6b4-41c0-a70f-c752b616faa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.444 189613 DEBUG nova.network.neutron [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Updating instance_info_cache with network_info: [{"id": "5430cfcb-550b-4518-9caa-0720f99730b9", "address": "fa:16:3e:85:21:ae", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5430cfcb-55", "ovs_interfaceid": "5430cfcb-550b-4518-9caa-0720f99730b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.479 189613 DEBUG oslo_concurrency.lockutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Releasing lock "refresh_cache-ea741b45-c6b4-41c0-a70f-c752b616faa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.480 189613 DEBUG nova.compute.manager [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Instance network_info: |[{"id": "5430cfcb-550b-4518-9caa-0720f99730b9", "address": "fa:16:3e:85:21:ae", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5430cfcb-55", "ovs_interfaceid": "5430cfcb-550b-4518-9caa-0720f99730b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
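The `network_info` structure logged above carries everything the virt driver needs to plug the port: the OVS integration bridge (`br-int`), the tap device name, the fixed IP and MAC, and the MTU. A small sketch of pulling the commonly used fields out of that JSON (the function and variable names are hypothetical; the structure is taken from the log):

```python
import json

def summarize_vif(network_info_json: str) -> dict:
    """Extract the fields of interest from the network_info blob logged
    above (bridge, tap device, MAC, fixed IP, MTU). Sketch only."""
    vif = json.loads(network_info_json)[0]          # single port in this boot
    subnet = vif["network"]["subnets"][0]
    return {
        "bridge": vif["details"]["bridge_name"],    # "br-int"
        "devname": vif["devname"],                  # "tap5430cfcb-55"
        "mac": vif["address"],                      # "fa:16:3e:85:21:ae"
        "fixed_ip": subnet["ips"][0]["address"],    # "192.168.0.169"
        "mtu": vif["network"]["meta"]["mtu"],       # 1442
    }
```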
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.480 189613 DEBUG oslo_concurrency.lockutils [req-c89dd907-21f6-424d-b72d-e1c3bb654235 req-0c23ac2f-04a6-4326-9222-ff6a46056b80 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquired lock "refresh_cache-ea741b45-c6b4-41c0-a70f-c752b616faa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.481 189613 DEBUG nova.network.neutron [req-c89dd907-21f6-424d-b72d-e1c3bb654235 req-0c23ac2f-04a6-4326-9222-ff6a46056b80 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Refreshing network info cache for port 5430cfcb-550b-4518-9caa-0720f99730b9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.485 189613 DEBUG nova.virt.libvirt.driver [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Start _get_guest_xml network_info=[{"id": "5430cfcb-550b-4518-9caa-0720f99730b9", "address": "fa:16:3e:85:21:ae", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5430cfcb-55", "ovs_interfaceid": "5430cfcb-550b-4518-9caa-0720f99730b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-24T22:03:25Z,direct_url=<?>,disk_format='qcow2',id=a63b9561-12dc-4c11-858f-aa6fafbed036,min_disk=0,min_ram=0,name='cirros',owner='309342b7e3e849b2a5dd56651d8fa068',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-24T22:03:26Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'size': 0, 'encryption_format': None, 'guest_format': None, 'image_id': 'a63b9561-12dc-4c11-858f-aa6fafbed036'}], 'ephemerals': [{'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vdb', 'device_type': 'disk', 'encryption_secret_uuid': None, 'size': 1, 'encryption_format': None, 'guest_format': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.495 189613 WARNING nova.virt.libvirt.driver [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.508 189613 DEBUG nova.virt.libvirt.host [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.509 189613 DEBUG nova.virt.libvirt.host [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.514 189613 DEBUG nova.virt.libvirt.host [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.515 189613 DEBUG nova.virt.libvirt.host [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.516 189613 DEBUG nova.virt.libvirt.driver [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.516 189613 DEBUG nova.virt.hardware [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-24T22:03:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='cb3b4b8d-66d1-4524-ab48-d562ca8e5b4b',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-24T22:03:25Z,direct_url=<?>,disk_format='qcow2',id=a63b9561-12dc-4c11-858f-aa6fafbed036,min_disk=0,min_ram=0,name='cirros',owner='309342b7e3e849b2a5dd56651d8fa068',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-24T22:03:26Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.517 189613 DEBUG nova.virt.hardware [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.517 189613 DEBUG nova.virt.hardware [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.518 189613 DEBUG nova.virt.hardware [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.518 189613 DEBUG nova.virt.hardware [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.519 189613 DEBUG nova.virt.hardware [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.520 189613 DEBUG nova.virt.hardware [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.521 189613 DEBUG nova.virt.hardware [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.521 189613 DEBUG nova.virt.hardware [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.522 189613 DEBUG nova.virt.hardware [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.523 189613 DEBUG nova.virt.hardware [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
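The lines above show the guest CPU topology being chosen: with no flavor or image constraints (all preferences 0:0:0, maxima 65536), the only topology whose sockets × cores × threads product equals the single vCPU is 1:1:1, which is what appears in the domain XML below. A toy enumeration of the same idea (not the actual `nova.virt.hardware` implementation):

```python
import itertools

def possible_topologies(vcpus: int, max_sockets: int = 65536,
                        max_cores: int = 65536, max_threads: int = 65536):
    """Yield (sockets, cores, threads) triples whose product equals the
    vCPU count and that fit within the limits logged above (toy sketch)."""
    for s, c, t in itertools.product(range(1, vcpus + 1), repeat=3):
        if (s * c * t == vcpus and s <= max_sockets
                and c <= max_cores and t <= max_threads):
            yield (s, c, t)

# For the 1-vCPU m1.small flavor in the log this yields only (1, 1, 1).
```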
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.530 189613 DEBUG nova.privsep.utils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.532 189613 DEBUG nova.virt.libvirt.vif [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T22:04:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='a63b9561-12dc-4c11-858f-aa6fafbed036',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='309342b7e3e849b2a5dd56651d8fa068',ramdisk_id='',reservation_id='r-ju5wfah9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='a63b9561-12dc-4c11-858f-aa6fafbed036',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T22:04:36Z,user_data=None,user_id='572aaac113f54af8a894707849aed6bf',uuid=ea741b45-c6b4-41c0-a70f-c752b616faa2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5430cfcb-550b-4518-9caa-0720f99730b9", "address": "fa:16:3e:85:21:ae", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5430cfcb-55", "ovs_interfaceid": "5430cfcb-550b-4518-9caa-0720f99730b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.533 189613 DEBUG nova.network.os_vif_util [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Converting VIF {"id": "5430cfcb-550b-4518-9caa-0720f99730b9", "address": "fa:16:3e:85:21:ae", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5430cfcb-55", "ovs_interfaceid": "5430cfcb-550b-4518-9caa-0720f99730b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.535 189613 DEBUG nova.network.os_vif_util [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:85:21:ae,bridge_name='br-int',has_traffic_filtering=True,id=5430cfcb-550b-4518-9caa-0720f99730b9,network=Network(1d1b3625-954d-4d8b-8b3f-323c25d9b42a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5430cfcb-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.538 189613 DEBUG nova.objects.instance [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lazy-loading 'pci_devices' on Instance uuid ea741b45-c6b4-41c0-a70f-c752b616faa2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.560 189613 DEBUG nova.virt.libvirt.driver [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] End _get_guest_xml xml=<domain type="kvm">
Nov 24 22:04:42 compute-0 nova_compute[189608]:   <uuid>ea741b45-c6b4-41c0-a70f-c752b616faa2</uuid>
Nov 24 22:04:42 compute-0 nova_compute[189608]:   <name>instance-00000001</name>
Nov 24 22:04:42 compute-0 nova_compute[189608]:   <memory>524288</memory>
Nov 24 22:04:42 compute-0 nova_compute[189608]:   <vcpu>1</vcpu>
Nov 24 22:04:42 compute-0 nova_compute[189608]:   <metadata>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 22:04:42 compute-0 nova_compute[189608]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:       <nova:name>test_0</nova:name>
Nov 24 22:04:42 compute-0 nova_compute[189608]:       <nova:creationTime>2025-11-24 22:04:42</nova:creationTime>
Nov 24 22:04:42 compute-0 nova_compute[189608]:       <nova:flavor name="m1.small">
Nov 24 22:04:42 compute-0 nova_compute[189608]:         <nova:memory>512</nova:memory>
Nov 24 22:04:42 compute-0 nova_compute[189608]:         <nova:disk>1</nova:disk>
Nov 24 22:04:42 compute-0 nova_compute[189608]:         <nova:swap>0</nova:swap>
Nov 24 22:04:42 compute-0 nova_compute[189608]:         <nova:ephemeral>1</nova:ephemeral>
Nov 24 22:04:42 compute-0 nova_compute[189608]:         <nova:vcpus>1</nova:vcpus>
Nov 24 22:04:42 compute-0 nova_compute[189608]:       </nova:flavor>
Nov 24 22:04:42 compute-0 nova_compute[189608]:       <nova:owner>
Nov 24 22:04:42 compute-0 nova_compute[189608]:         <nova:user uuid="572aaac113f54af8a894707849aed6bf">admin</nova:user>
Nov 24 22:04:42 compute-0 nova_compute[189608]:         <nova:project uuid="309342b7e3e849b2a5dd56651d8fa068">admin</nova:project>
Nov 24 22:04:42 compute-0 nova_compute[189608]:       </nova:owner>
Nov 24 22:04:42 compute-0 nova_compute[189608]:       <nova:root type="image" uuid="a63b9561-12dc-4c11-858f-aa6fafbed036"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:       <nova:ports>
Nov 24 22:04:42 compute-0 nova_compute[189608]:         <nova:port uuid="5430cfcb-550b-4518-9caa-0720f99730b9">
Nov 24 22:04:42 compute-0 nova_compute[189608]:           <nova:ip type="fixed" address="192.168.0.169" ipVersion="4"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:         </nova:port>
Nov 24 22:04:42 compute-0 nova_compute[189608]:       </nova:ports>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     </nova:instance>
Nov 24 22:04:42 compute-0 nova_compute[189608]:   </metadata>
Nov 24 22:04:42 compute-0 nova_compute[189608]:   <sysinfo type="smbios">
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <system>
Nov 24 22:04:42 compute-0 nova_compute[189608]:       <entry name="manufacturer">RDO</entry>
Nov 24 22:04:42 compute-0 nova_compute[189608]:       <entry name="product">OpenStack Compute</entry>
Nov 24 22:04:42 compute-0 nova_compute[189608]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 22:04:42 compute-0 nova_compute[189608]:       <entry name="serial">ea741b45-c6b4-41c0-a70f-c752b616faa2</entry>
Nov 24 22:04:42 compute-0 nova_compute[189608]:       <entry name="uuid">ea741b45-c6b4-41c0-a70f-c752b616faa2</entry>
Nov 24 22:04:42 compute-0 nova_compute[189608]:       <entry name="family">Virtual Machine</entry>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     </system>
Nov 24 22:04:42 compute-0 nova_compute[189608]:   </sysinfo>
Nov 24 22:04:42 compute-0 nova_compute[189608]:   <os>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <boot dev="hd"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <smbios mode="sysinfo"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:   </os>
Nov 24 22:04:42 compute-0 nova_compute[189608]:   <features>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <acpi/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <apic/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <vmcoreinfo/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:   </features>
Nov 24 22:04:42 compute-0 nova_compute[189608]:   <clock offset="utc">
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <timer name="pit" tickpolicy="delay"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <timer name="hpet" present="no"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:   </clock>
Nov 24 22:04:42 compute-0 nova_compute[189608]:   <cpu mode="host-model" match="exact">
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <topology sockets="1" cores="1" threads="1"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:   </cpu>
Nov 24 22:04:42 compute-0 nova_compute[189608]:   <devices>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <disk type="file" device="disk">
Nov 24 22:04:42 compute-0 nova_compute[189608]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:       <source file="/var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:       <target dev="vda" bus="virtio"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     </disk>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <disk type="file" device="disk">
Nov 24 22:04:42 compute-0 nova_compute[189608]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:       <source file="/var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:       <target dev="vdb" bus="virtio"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     </disk>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <disk type="file" device="cdrom">
Nov 24 22:04:42 compute-0 nova_compute[189608]:       <driver name="qemu" type="raw" cache="none"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:       <source file="/var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.config"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:       <target dev="sda" bus="sata"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     </disk>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <interface type="ethernet">
Nov 24 22:04:42 compute-0 nova_compute[189608]:       <mac address="fa:16:3e:85:21:ae"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:       <model type="virtio"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:       <driver name="vhost" rx_queue_size="512"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:       <mtu size="1442"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:       <target dev="tap5430cfcb-55"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     </interface>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <serial type="pty">
Nov 24 22:04:42 compute-0 nova_compute[189608]:       <log file="/var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/console.log" append="off"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     </serial>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <video>
Nov 24 22:04:42 compute-0 nova_compute[189608]:       <model type="virtio"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     </video>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <input type="tablet" bus="usb"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <rng model="virtio">
Nov 24 22:04:42 compute-0 nova_compute[189608]:       <backend model="random">/dev/urandom</backend>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     </rng>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <controller type="usb" index="0"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     <memballoon model="virtio">
Nov 24 22:04:42 compute-0 nova_compute[189608]:       <stats period="10"/>
Nov 24 22:04:42 compute-0 nova_compute[189608]:     </memballoon>
Nov 24 22:04:42 compute-0 nova_compute[189608]:   </devices>
Nov 24 22:04:42 compute-0 nova_compute[189608]: </domain>
Nov 24 22:04:42 compute-0 nova_compute[189608]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
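[editor's note] The domain XML dumped above is what Nova's libvirt driver hands to libvirt when it defines the guest. To pull a few fields back out of such a dump (which disk images the guest uses, which Nova flavor it was built from), a stdlib-only sketch like the one below is enough. The nova metadata namespace URI and element layout are taken from the XML above; the function name and file path are illustrative, not part of Nova.

    # Hypothetical helper: summarize a libvirt domain XML dump such as the
    # one logged by _get_guest_xml above.
    import xml.etree.ElementTree as ET

    NOVA_NS = {"nova": "http://openstack.org/xmlns/libvirt/nova/1.1"}  # from the <metadata> block

    def summarize_domain(xml_path: str) -> dict:
        root = ET.parse(xml_path).getroot()
        disks = [
            {
                "target": disk.find("target").get("dev"),
                "source": disk.find("source").get("file"),
                "format": disk.find("driver").get("type"),
            }
            for disk in root.findall("./devices/disk")
        ]
        flavor = root.find("./metadata/nova:instance/nova:flavor", NOVA_NS)
        return {
            "uuid": root.findtext("uuid"),
            "memory_kib": int(root.findtext("memory")),
            "vcpus": int(root.findtext("vcpu")),
            "flavor": flavor.get("name") if flavor is not None else None,
            "disks": disks,
        }

    # e.g. summarize_domain("instance-00000001.xml")
    # after: virsh dumpxml instance-00000001 > instance-00000001.xml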
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.562 189613 DEBUG nova.compute.manager [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Preparing to wait for external event network-vif-plugged-5430cfcb-550b-4518-9caa-0720f99730b9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.562 189613 DEBUG oslo_concurrency.lockutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "ea741b45-c6b4-41c0-a70f-c752b616faa2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.563 189613 DEBUG oslo_concurrency.lockutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "ea741b45-c6b4-41c0-a70f-c752b616faa2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.563 189613 DEBUG oslo_concurrency.lockutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "ea741b45-c6b4-41c0-a70f-c752b616faa2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.563 189613 DEBUG nova.virt.libvirt.vif [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T22:04:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='a63b9561-12dc-4c11-858f-aa6fafbed036',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='309342b7e3e849b2a5dd56651d8fa068',ramdisk_id='',reservation_id='r-ju5wfah9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='a63b9561-12dc-4c11-858f-aa6fafbed036',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T22:04:36Z,user_data=None,user_id='572aaac113f54af8a894707849aed6bf',uuid=ea741b45-c6b4-41c0-a70f-c752b616faa2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5430cfcb-550b-4518-9caa-0720f99730b9", "address": "fa:16:3e:85:21:ae", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5430cfcb-55", "ovs_interfaceid": "5430cfcb-550b-4518-9caa-0720f99730b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.564 189613 DEBUG nova.network.os_vif_util [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Converting VIF {"id": "5430cfcb-550b-4518-9caa-0720f99730b9", "address": "fa:16:3e:85:21:ae", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5430cfcb-55", "ovs_interfaceid": "5430cfcb-550b-4518-9caa-0720f99730b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.564 189613 DEBUG nova.network.os_vif_util [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:85:21:ae,bridge_name='br-int',has_traffic_filtering=True,id=5430cfcb-550b-4518-9caa-0720f99730b9,network=Network(1d1b3625-954d-4d8b-8b3f-323c25d9b42a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5430cfcb-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.569 189613 DEBUG os_vif [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:85:21:ae,bridge_name='br-int',has_traffic_filtering=True,id=5430cfcb-550b-4518-9caa-0720f99730b9,network=Network(1d1b3625-954d-4d8b-8b3f-323c25d9b42a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5430cfcb-55') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.604 189613 DEBUG ovsdbapp.backend.ovs_idl [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.604 189613 DEBUG ovsdbapp.backend.ovs_idl [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.604 189613 DEBUG ovsdbapp.backend.ovs_idl [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.605 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.606 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [POLLOUT] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.606 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.607 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.608 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.611 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.621 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.621 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.622 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 22:04:42 compute-0 nova_compute[189608]: 2025-11-24 22:04:42.623 189613 INFO oslo.privsep.daemon [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmph3gw9l5a/privsep.sock']
Nov 24 22:04:43 compute-0 nova_compute[189608]: 2025-11-24 22:04:43.041 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:04:43 compute-0 nova_compute[189608]: 2025-11-24 22:04:43.371 189613 INFO oslo.privsep.daemon [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Spawned new privsep daemon via rootwrap
Nov 24 22:04:43 compute-0 nova_compute[189608]: 2025-11-24 22:04:43.260 239913 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 24 22:04:43 compute-0 nova_compute[189608]: 2025-11-24 22:04:43.267 239913 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 24 22:04:43 compute-0 nova_compute[189608]: 2025-11-24 22:04:43.271 239913 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Nov 24 22:04:43 compute-0 nova_compute[189608]: 2025-11-24 22:04:43.271 239913 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239913
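[editor's note] The "Running privsep helper ... Spawned new privsep daemon ... privsep process running with capabilities" lines show oslo.privsep forking a root helper the first time a privileged call is made from the vif_plug_ovs context; the metadata agent does the same later for its own contexts. As a rough illustration of the mechanism only (not Nova's or Neutron's actual privsep entrypoints), a context is declared once with the capabilities it may keep, and decorated functions transparently run inside that root daemon:

    # Minimal, hypothetical oslo.privsep usage; `demo_context` and
    # `read_protected_file` are illustrative names, not Nova/Neutron code.
    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    demo_context = priv_context.PrivContext(
        "demo",
        cfg_section="demo_privsep",
        pypath=__name__ + ".demo_context",
        capabilities=[caps.CAP_DAC_OVERRIDE, caps.CAP_NET_ADMIN],  # same set as logged above
    )

    @demo_context.entrypoint
    def read_protected_file(path):
        # Executes inside the root privsep daemon; the unprivileged service
        # only receives the serialized return value.
        with open(path) as f:
            return f.read()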
Nov 24 22:04:43 compute-0 podman[239916]: 2025-11-24 22:04:43.573243284 +0000 UTC m=+0.114957943 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 24 22:04:43 compute-0 podman[239915]: 2025-11-24 22:04:43.644475558 +0000 UTC m=+0.197910771 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 22:04:43 compute-0 nova_compute[189608]: 2025-11-24 22:04:43.668 189613 DEBUG nova.network.neutron [req-c89dd907-21f6-424d-b72d-e1c3bb654235 req-0c23ac2f-04a6-4326-9222-ff6a46056b80 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Updated VIF entry in instance network info cache for port 5430cfcb-550b-4518-9caa-0720f99730b9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 22:04:43 compute-0 nova_compute[189608]: 2025-11-24 22:04:43.669 189613 DEBUG nova.network.neutron [req-c89dd907-21f6-424d-b72d-e1c3bb654235 req-0c23ac2f-04a6-4326-9222-ff6a46056b80 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Updating instance_info_cache with network_info: [{"id": "5430cfcb-550b-4518-9caa-0720f99730b9", "address": "fa:16:3e:85:21:ae", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5430cfcb-55", "ovs_interfaceid": "5430cfcb-550b-4518-9caa-0720f99730b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:04:43 compute-0 nova_compute[189608]: 2025-11-24 22:04:43.684 189613 DEBUG oslo_concurrency.lockutils [req-c89dd907-21f6-424d-b72d-e1c3bb654235 req-0c23ac2f-04a6-4326-9222-ff6a46056b80 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Releasing lock "refresh_cache-ea741b45-c6b4-41c0-a70f-c752b616faa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:04:43 compute-0 nova_compute[189608]: 2025-11-24 22:04:43.699 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:04:43 compute-0 nova_compute[189608]: 2025-11-24 22:04:43.700 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5430cfcb-55, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:04:43 compute-0 nova_compute[189608]: 2025-11-24 22:04:43.701 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5430cfcb-55, col_values=(('external_ids', {'iface-id': '5430cfcb-550b-4518-9caa-0720f99730b9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:85:21:ae', 'vm-uuid': 'ea741b45-c6b4-41c0-a70f-c752b616faa2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:04:43 compute-0 nova_compute[189608]: 2025-11-24 22:04:43.705 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:04:43 compute-0 NetworkManager[56413]: <info>  [1764021883.7075] manager: (tap5430cfcb-55): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Nov 24 22:04:43 compute-0 nova_compute[189608]: 2025-11-24 22:04:43.709 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 22:04:43 compute-0 nova_compute[189608]: 2025-11-24 22:04:43.716 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:04:43 compute-0 nova_compute[189608]: 2025-11-24 22:04:43.717 189613 INFO os_vif [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:85:21:ae,bridge_name='br-int',has_traffic_filtering=True,id=5430cfcb-550b-4518-9caa-0720f99730b9,network=Network(1d1b3625-954d-4d8b-8b3f-323c25d9b42a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5430cfcb-55')
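[editor's note] The two ovsdbapp commands above (AddPortCommand followed by DbSetCommand on the Interface row) are how os-vif wires the tap device into br-int and stamps the Neutron port UUID into external_ids; that iface-id is what ovn-controller matches a few seconds later when it claims the lport. A rough sketch of the same two-step transaction, assuming the local ovsdb-server listens on tcp:127.0.0.1:6640 as in the log; the connection setup here is illustrative, not os-vif's own code.

    # Hypothetical sketch of the AddPort + DbSet transaction seen above,
    # using ovsdbapp's Open_vSwitch schema API.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = "tcp:127.0.0.1:6640"
    idl = connection.OvsdbIdl.from_server(OVSDB, "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    external_ids = {
        "iface-id": "5430cfcb-550b-4518-9caa-0720f99730b9",   # Neutron port UUID
        "iface-status": "active",
        "attached-mac": "fa:16:3e:85:21:ae",
        "vm-uuid": "ea741b45-c6b4-41c0-a70f-c752b616faa2",
    }

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tap5430cfcb-55", may_exist=True))
        txn.add(api.db_set("Interface", "tap5430cfcb-55", ("external_ids", external_ids)))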
Nov 24 22:04:43 compute-0 nova_compute[189608]: 2025-11-24 22:04:43.785 189613 DEBUG nova.virt.libvirt.driver [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 22:04:43 compute-0 nova_compute[189608]: 2025-11-24 22:04:43.786 189613 DEBUG nova.virt.libvirt.driver [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 22:04:43 compute-0 nova_compute[189608]: 2025-11-24 22:04:43.787 189613 DEBUG nova.virt.libvirt.driver [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 22:04:43 compute-0 nova_compute[189608]: 2025-11-24 22:04:43.787 189613 DEBUG nova.virt.libvirt.driver [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] No VIF found with MAC fa:16:3e:85:21:ae, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 24 22:04:43 compute-0 nova_compute[189608]: 2025-11-24 22:04:43.788 189613 INFO nova.virt.libvirt.driver [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Using config drive
Nov 24 22:04:46 compute-0 nova_compute[189608]: 2025-11-24 22:04:46.057 189613 INFO nova.virt.libvirt.driver [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Creating config drive at /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.config
Nov 24 22:04:46 compute-0 nova_compute[189608]: 2025-11-24 22:04:46.068 189613 DEBUG oslo_concurrency.processutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8zfkpbll execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:04:46 compute-0 nova_compute[189608]: 2025-11-24 22:04:46.218 189613 DEBUG oslo_concurrency.processutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8zfkpbll" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
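[editor's note] The config drive built above is a plain ISO 9660 image labelled config-2, which is the volume label cloud-init searches for inside the guest. A subprocess sketch of an equivalent invocation, with the flags from the logged command annotated; `staging_dir` stands in for the openstack/... metadata tree Nova normally assembles in a temp directory and is hypothetical here.

    # Illustrative re-run of the config-drive build logged above.
    import subprocess

    def build_config_drive(staging_dir: str, output_iso: str) -> None:
        subprocess.run(
            [
                "/usr/bin/mkisofs",
                "-o", output_iso,          # output image, e.g. .../disk.config
                "-ldots",                  # allow file names starting with '.'
                "-allow-lowercase",
                "-allow-multidot",
                "-l",                      # allow full 31-character file names
                "-publisher", "OpenStack Compute",
                "-quiet",
                "-J",                      # Joliet extension
                "-r",                      # Rock Ridge with sane ownership/permissions
                "-V", "config-2",          # volume label cloud-init looks for
                staging_dir,
            ],
            check=True,
        )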
Nov 24 22:04:46 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Nov 24 22:04:46 compute-0 NetworkManager[56413]: <info>  [1764021886.3564] manager: (tap5430cfcb-55): new Tun device (/org/freedesktop/NetworkManager/Devices/20)
Nov 24 22:04:46 compute-0 kernel: tap5430cfcb-55: entered promiscuous mode
Nov 24 22:04:46 compute-0 ovn_controller[97889]: 2025-11-24T22:04:46Z|00027|binding|INFO|Claiming lport 5430cfcb-550b-4518-9caa-0720f99730b9 for this chassis.
Nov 24 22:04:46 compute-0 ovn_controller[97889]: 2025-11-24T22:04:46Z|00028|binding|INFO|5430cfcb-550b-4518-9caa-0720f99730b9: Claiming fa:16:3e:85:21:ae 192.168.0.169
Nov 24 22:04:46 compute-0 nova_compute[189608]: 2025-11-24 22:04:46.360 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:04:46 compute-0 nova_compute[189608]: 2025-11-24 22:04:46.371 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:04:46 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:46.380 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:21:ae 192.168.0.169'], port_security=['fa:16:3e:85:21:ae 192.168.0.169'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.169/24', 'neutron:device_id': 'ea741b45-c6b4-41c0-a70f-c752b616faa2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1d1b3625-954d-4d8b-8b3f-323c25d9b42a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '309342b7e3e849b2a5dd56651d8fa068', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c15160ba-5e91-4c5e-8c90-4266712c07a0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b82cef4e-b720-4309-9703-51bb202fedba, chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], logical_port=5430cfcb-550b-4518-9caa-0720f99730b9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:04:46 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:46.382 106776 INFO neutron.agent.ovn.metadata.agent [-] Port 5430cfcb-550b-4518-9caa-0720f99730b9 in datapath 1d1b3625-954d-4d8b-8b3f-323c25d9b42a bound to our chassis
Nov 24 22:04:46 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:46.386 106776 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1d1b3625-954d-4d8b-8b3f-323c25d9b42a
Nov 24 22:04:46 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:46.388 106776 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpu4034ikq/privsep.sock']
Nov 24 22:04:46 compute-0 systemd-udevd[239987]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 22:04:46 compute-0 NetworkManager[56413]: <info>  [1764021886.4632] device (tap5430cfcb-55): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 22:04:46 compute-0 NetworkManager[56413]: <info>  [1764021886.4640] device (tap5430cfcb-55): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 24 22:04:46 compute-0 nova_compute[189608]: 2025-11-24 22:04:46.483 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:04:46 compute-0 ovn_controller[97889]: 2025-11-24T22:04:46Z|00029|binding|INFO|Setting lport 5430cfcb-550b-4518-9caa-0720f99730b9 ovn-installed in OVS
Nov 24 22:04:46 compute-0 ovn_controller[97889]: 2025-11-24T22:04:46Z|00030|binding|INFO|Setting lport 5430cfcb-550b-4518-9caa-0720f99730b9 up in Southbound
Nov 24 22:04:46 compute-0 nova_compute[189608]: 2025-11-24 22:04:46.492 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:04:46 compute-0 systemd-machined[155884]: New machine qemu-1-instance-00000001.
Nov 24 22:04:46 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Nov 24 22:04:46 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 24 22:04:46 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 24 22:04:47 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:47.116 106776 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 24 22:04:47 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:47.117 106776 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpu4034ikq/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 24 22:04:47 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:46.957 240020 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 24 22:04:47 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:46.964 240020 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 24 22:04:47 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:46.968 240020 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Nov 24 22:04:47 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:46.968 240020 INFO oslo.privsep.daemon [-] privsep daemon running as pid 240020
Nov 24 22:04:47 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:47.122 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[2aac2329-0989-4115-a57a-ebc2577c8d0f]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:04:47 compute-0 nova_compute[189608]: 2025-11-24 22:04:47.173 189613 DEBUG nova.virt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Emitting event <LifecycleEvent: 1764021887.1726973, ea741b45-c6b4-41c0-a70f-c752b616faa2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:04:47 compute-0 nova_compute[189608]: 2025-11-24 22:04:47.174 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] VM Started (Lifecycle Event)
Nov 24 22:04:47 compute-0 nova_compute[189608]: 2025-11-24 22:04:47.219 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:04:47 compute-0 nova_compute[189608]: 2025-11-24 22:04:47.230 189613 DEBUG nova.virt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Emitting event <LifecycleEvent: 1764021887.1728623, ea741b45-c6b4-41c0-a70f-c752b616faa2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:04:47 compute-0 nova_compute[189608]: 2025-11-24 22:04:47.231 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] VM Paused (Lifecycle Event)
Nov 24 22:04:47 compute-0 nova_compute[189608]: 2025-11-24 22:04:47.263 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:04:47 compute-0 nova_compute[189608]: 2025-11-24 22:04:47.272 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 22:04:47 compute-0 nova_compute[189608]: 2025-11-24 22:04:47.298 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] During sync_power_state the instance has a pending task (spawning). Skip.
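[editor's note] The lifecycle handler above compares the DB power_state (0, never run) with what libvirt reports: 3 while the new guest is held paused during setup, then 1 once it resumes. The numeric codes come from Nova's power-state constants (per nova.compute.power_state); the lookup table below is only a reading aid for these log lines.

    # Nova power_state codes as used in the "Synchronizing instance power
    # state" messages above (see nova.compute.power_state).
    POWER_STATE = {
        0: "NOSTATE",    # DB value before the guest ever ran
        1: "RUNNING",    # reported once the VM resumes
        3: "PAUSED",     # reported while libvirt holds the new guest paused
        4: "SHUTDOWN",
        6: "CRASHED",
        7: "SUSPENDED",
    }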
Nov 24 22:04:47 compute-0 nova_compute[189608]: 2025-11-24 22:04:47.427 189613 DEBUG nova.compute.manager [req-839148d5-4a81-4d28-970c-f519c8f9f9f3 req-b93bc7ae-65e1-4c21-ad48-8889efb42a25 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Received event network-vif-plugged-5430cfcb-550b-4518-9caa-0720f99730b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:04:47 compute-0 nova_compute[189608]: 2025-11-24 22:04:47.427 189613 DEBUG oslo_concurrency.lockutils [req-839148d5-4a81-4d28-970c-f519c8f9f9f3 req-b93bc7ae-65e1-4c21-ad48-8889efb42a25 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "ea741b45-c6b4-41c0-a70f-c752b616faa2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:04:47 compute-0 nova_compute[189608]: 2025-11-24 22:04:47.428 189613 DEBUG oslo_concurrency.lockutils [req-839148d5-4a81-4d28-970c-f519c8f9f9f3 req-b93bc7ae-65e1-4c21-ad48-8889efb42a25 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "ea741b45-c6b4-41c0-a70f-c752b616faa2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:04:47 compute-0 nova_compute[189608]: 2025-11-24 22:04:47.429 189613 DEBUG oslo_concurrency.lockutils [req-839148d5-4a81-4d28-970c-f519c8f9f9f3 req-b93bc7ae-65e1-4c21-ad48-8889efb42a25 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "ea741b45-c6b4-41c0-a70f-c752b616faa2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:04:47 compute-0 nova_compute[189608]: 2025-11-24 22:04:47.429 189613 DEBUG nova.compute.manager [req-839148d5-4a81-4d28-970c-f519c8f9f9f3 req-b93bc7ae-65e1-4c21-ad48-8889efb42a25 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Processing event network-vif-plugged-5430cfcb-550b-4518-9caa-0720f99730b9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 24 22:04:47 compute-0 nova_compute[189608]: 2025-11-24 22:04:47.430 189613 DEBUG nova.compute.manager [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
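[editor's note] The sequence "Preparing to wait for external event network-vif-plugged ... Successfully plugged vif ... Received event ... wait completed in 0 seconds" is the register-before-act pattern: the waiter is registered before the port is plugged so the Neutron callback cannot be missed, even when it arrives (as here) before the wait actually starts. A generic sketch of that pattern with threading primitives, purely illustrative and unrelated to Nova's InstanceEvents implementation:

    # Register-then-act pattern; names are hypothetical, not Nova's API.
    import threading

    _events: dict[str, threading.Event] = {}

    def prepare_for_event(name: str) -> threading.Event:
        # Register interest *before* triggering the action that causes the event.
        return _events.setdefault(name, threading.Event())

    def deliver_event(name: str) -> None:
        # Called by the notification handler (here: network-vif-plugged from Neutron).
        _events.setdefault(name, threading.Event()).set()

    def plug_and_wait(port_id: str, plug, timeout: float = 300.0) -> None:
        waiter = prepare_for_event(f"network-vif-plugged-{port_id}")
        plug()                                  # e.g. the os-vif plug shown above
        if not waiter.wait(timeout):
            raise TimeoutError(f"vif-plugged event for {port_id} never arrived")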
Nov 24 22:04:47 compute-0 nova_compute[189608]: 2025-11-24 22:04:47.436 189613 DEBUG nova.virt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Emitting event <LifecycleEvent: 1764021887.4363346, ea741b45-c6b4-41c0-a70f-c752b616faa2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:04:47 compute-0 nova_compute[189608]: 2025-11-24 22:04:47.438 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] VM Resumed (Lifecycle Event)
Nov 24 22:04:47 compute-0 nova_compute[189608]: 2025-11-24 22:04:47.442 189613 DEBUG nova.virt.libvirt.driver [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 24 22:04:47 compute-0 nova_compute[189608]: 2025-11-24 22:04:47.452 189613 INFO nova.virt.libvirt.driver [-] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Instance spawned successfully.
Nov 24 22:04:47 compute-0 nova_compute[189608]: 2025-11-24 22:04:47.454 189613 DEBUG nova.virt.libvirt.driver [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 24 22:04:47 compute-0 nova_compute[189608]: 2025-11-24 22:04:47.461 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:04:47 compute-0 nova_compute[189608]: 2025-11-24 22:04:47.480 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 22:04:47 compute-0 nova_compute[189608]: 2025-11-24 22:04:47.491 189613 DEBUG nova.virt.libvirt.driver [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:04:47 compute-0 nova_compute[189608]: 2025-11-24 22:04:47.492 189613 DEBUG nova.virt.libvirt.driver [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:04:47 compute-0 nova_compute[189608]: 2025-11-24 22:04:47.493 189613 DEBUG nova.virt.libvirt.driver [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:04:47 compute-0 nova_compute[189608]: 2025-11-24 22:04:47.494 189613 DEBUG nova.virt.libvirt.driver [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:04:47 compute-0 nova_compute[189608]: 2025-11-24 22:04:47.494 189613 DEBUG nova.virt.libvirt.driver [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:04:47 compute-0 nova_compute[189608]: 2025-11-24 22:04:47.495 189613 DEBUG nova.virt.libvirt.driver [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:04:47 compute-0 nova_compute[189608]: 2025-11-24 22:04:47.502 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 22:04:47 compute-0 nova_compute[189608]: 2025-11-24 22:04:47.546 189613 INFO nova.compute.manager [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Took 11.07 seconds to spawn the instance on the hypervisor.
Nov 24 22:04:47 compute-0 nova_compute[189608]: 2025-11-24 22:04:47.549 189613 DEBUG nova.compute.manager [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:04:47 compute-0 nova_compute[189608]: 2025-11-24 22:04:47.621 189613 INFO nova.compute.manager [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Took 11.63 seconds to build instance.
Nov 24 22:04:47 compute-0 nova_compute[189608]: 2025-11-24 22:04:47.637 189613 DEBUG oslo_concurrency.lockutils [None req-1825ae85-37b7-4084-bff2-de482ef57f57 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "ea741b45-c6b4-41c0-a70f-c752b616faa2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:04:47 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:47.682 240020 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:04:47 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:47.682 240020 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:04:47 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:47.682 240020 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:04:48 compute-0 nova_compute[189608]: 2025-11-24 22:04:48.044 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:04:48 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:48.280 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[4a6098ea-cc9a-4216-9ab2-40bb777be223]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:04:48 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:48.281 106776 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1d1b3625-91 in ovnmeta-1d1b3625-954d-4d8b-8b3f-323c25d9b42a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 24 22:04:48 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:48.284 240020 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1d1b3625-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 24 22:04:48 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:48.284 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[0342fb4d-3238-44bc-b76f-af99e444260a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:04:48 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:48.287 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[c983244c-7085-4748-92af-69d6c44a115a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:04:48 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:48.332 106891 DEBUG oslo.privsep.daemon [-] privsep: reply[e3d309c9-806d-4822-9008-76efa7e05ea3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:04:48 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:48.375 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[6dd5069f-5a89-4b20-812b-211b6e21b85c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:04:48 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:48.379 106776 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpbuhc4d1p/privsep.sock']
Nov 24 22:04:48 compute-0 nova_compute[189608]: 2025-11-24 22:04:48.707 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:04:49 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:49.119 106776 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 24 22:04:49 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:49.121 106776 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpbuhc4d1p/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 24 22:04:49 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:48.990 240041 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 24 22:04:49 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:48.997 240041 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 24 22:04:49 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:49.000 240041 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 24 22:04:49 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:49.001 240041 INFO oslo.privsep.daemon [-] privsep daemon running as pid 240041
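[annotation] The lines above show the metadata agent forking a privileged helper: "Running privsep helper" launches it through sudo/neutron-rootwrap for the neutron.privileged.link_cmd context, the parent accepts the connection on the temporary unix socket, and the child reports running as uid/gid 0/0 with only CAP_NET_ADMIN and CAP_SYS_ADMIN retained. A rough sketch of how such a context is declared and used, patterned on that command line but with illustrative details (not the actual neutron.privileged module):

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    # Declares a privsep context; calling a decorated entrypoint spawns (or
    # reuses) the root daemon whose startup is logged above.
    link_cmd = priv_context.PrivContext(
        "neutron",
        cfg_section="privsep_link_cmd",
        pypath=__name__ + ".link_cmd",
        capabilities=[caps.CAP_NET_ADMIN, caps.CAP_SYS_ADMIN],
    )

    @link_cmd.entrypoint
    def set_link_attribute(device, namespace, **attrs):
        # Runs inside the privsep daemon (uid 0), not in the agent process;
        # results come back over the unix socket as the "privsep: reply" lines.
        pass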
Nov 24 22:04:49 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:49.126 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[d7c77016-0373-41d6-8da5-d496b1de42e1]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:04:49 compute-0 nova_compute[189608]: 2025-11-24 22:04:49.535 189613 DEBUG nova.compute.manager [req-560e5c1d-6545-42e4-82a0-632003d9e1e9 req-05bf1ef3-622d-4283-add1-e9ce80688976 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Received event network-vif-plugged-5430cfcb-550b-4518-9caa-0720f99730b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:04:49 compute-0 nova_compute[189608]: 2025-11-24 22:04:49.537 189613 DEBUG oslo_concurrency.lockutils [req-560e5c1d-6545-42e4-82a0-632003d9e1e9 req-05bf1ef3-622d-4283-add1-e9ce80688976 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "ea741b45-c6b4-41c0-a70f-c752b616faa2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:04:49 compute-0 nova_compute[189608]: 2025-11-24 22:04:49.538 189613 DEBUG oslo_concurrency.lockutils [req-560e5c1d-6545-42e4-82a0-632003d9e1e9 req-05bf1ef3-622d-4283-add1-e9ce80688976 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "ea741b45-c6b4-41c0-a70f-c752b616faa2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:04:49 compute-0 nova_compute[189608]: 2025-11-24 22:04:49.539 189613 DEBUG oslo_concurrency.lockutils [req-560e5c1d-6545-42e4-82a0-632003d9e1e9 req-05bf1ef3-622d-4283-add1-e9ce80688976 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "ea741b45-c6b4-41c0-a70f-c752b616faa2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:04:49 compute-0 nova_compute[189608]: 2025-11-24 22:04:49.540 189613 DEBUG nova.compute.manager [req-560e5c1d-6545-42e4-82a0-632003d9e1e9 req-05bf1ef3-622d-4283-add1-e9ce80688976 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] No waiting events found dispatching network-vif-plugged-5430cfcb-550b-4518-9caa-0720f99730b9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 22:04:49 compute-0 nova_compute[189608]: 2025-11-24 22:04:49.541 189613 WARNING nova.compute.manager [req-560e5c1d-6545-42e4-82a0-632003d9e1e9 req-05bf1ef3-622d-4283-add1-e9ce80688976 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Received unexpected event network-vif-plugged-5430cfcb-550b-4518-9caa-0720f99730b9 for instance with vm_state active and task_state None.
Nov 24 22:04:49 compute-0 podman[240046]: 2025-11-24 22:04:49.559787665 +0000 UTC m=+0.107838492 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 24 22:04:49 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:49.662 240041 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:04:49 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:49.664 240041 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:04:49 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:49.664 240041 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:04:50 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:50.251 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[9e40aa6c-8643-4a5d-b842-76875d13dc2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:04:50 compute-0 NetworkManager[56413]: <info>  [1764021890.2851] manager: (tap1d1b3625-90): new Veth device (/org/freedesktop/NetworkManager/Devices/21)
Nov 24 22:04:50 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:50.283 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[c9167508-0051-46e1-9bed-ecf5e4da4e81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:04:50 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:50.339 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[ddc86e75-7ff5-4210-89cd-58d5d369032f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:04:50 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:50.349 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[52d848ee-70cf-48d1-bfee-948e5ab218b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:04:50 compute-0 systemd-udevd[240078]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 22:04:50 compute-0 NetworkManager[56413]: <info>  [1764021890.3929] device (tap1d1b3625-90): carrier: link connected
Nov 24 22:04:50 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:50.407 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[6375f320-9a33-435e-bbc7-ead121000658]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:04:50 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:50.436 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[d36b8f28-e8e8-4fb8-9683-e2b156d8db9c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1d1b3625-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:e7:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 373788, 'reachable_time': 19807, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240096, 'error': None, 'target': 'ovnmeta-1d1b3625-954d-4d8b-8b3f-323c25d9b42a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:04:50 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:50.457 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[c0cb58c4-5ef0-4ea8-810c-d6b06f7392aa]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1c:e7b2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 373788, 'tstamp': 373788}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240097, 'error': None, 'target': 'ovnmeta-1d1b3625-954d-4d8b-8b3f-323c25d9b42a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:04:50 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:50.480 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[26f38fcc-a713-45e0-bd09-260f0aac535e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1d1b3625-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:e7:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 373788, 'reachable_time': 19807, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 240098, 'error': None, 'target': 'ovnmeta-1d1b3625-954d-4d8b-8b3f-323c25d9b42a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:04:50 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:50.518 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[01a2f2e3-3ed4-4158-9514-787af9c03d38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:04:50 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:50.621 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[d928affa-5404-4ba2-982e-a929f2742c50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:04:50 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:50.624 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d1b3625-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:04:50 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:50.626 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 22:04:50 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:50.627 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1d1b3625-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:04:50 compute-0 kernel: tap1d1b3625-90: entered promiscuous mode
Nov 24 22:04:50 compute-0 NetworkManager[56413]: <info>  [1764021890.6427] manager: (tap1d1b3625-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Nov 24 22:04:50 compute-0 nova_compute[189608]: 2025-11-24 22:04:50.631 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:04:50 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:50.652 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1d1b3625-90, col_values=(('external_ids', {'iface-id': '13073b7d-8165-42cd-87f4-fb1eb15a5b94'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:04:50 compute-0 ovn_controller[97889]: 2025-11-24T22:04:50Z|00031|binding|INFO|Releasing lport 13073b7d-8165-42cd-87f4-fb1eb15a5b94 from this chassis (sb_readonly=0)
Nov 24 22:04:50 compute-0 nova_compute[189608]: 2025-11-24 22:04:50.655 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:04:50 compute-0 nova_compute[189608]: 2025-11-24 22:04:50.657 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:04:50 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:50.659 106776 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1d1b3625-954d-4d8b-8b3f-323c25d9b42a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1d1b3625-954d-4d8b-8b3f-323c25d9b42a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 24 22:04:50 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:50.660 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[54a214f6-a0cb-418b-9edd-a7e92d654a10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:04:50 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:50.675 106776 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 24 22:04:50 compute-0 ovn_metadata_agent[106771]: global
Nov 24 22:04:50 compute-0 ovn_metadata_agent[106771]:     log         /dev/log local0 debug
Nov 24 22:04:50 compute-0 ovn_metadata_agent[106771]:     log-tag     haproxy-metadata-proxy-1d1b3625-954d-4d8b-8b3f-323c25d9b42a
Nov 24 22:04:50 compute-0 ovn_metadata_agent[106771]:     user        root
Nov 24 22:04:50 compute-0 ovn_metadata_agent[106771]:     group       root
Nov 24 22:04:50 compute-0 ovn_metadata_agent[106771]:     maxconn     1024
Nov 24 22:04:50 compute-0 ovn_metadata_agent[106771]:     pidfile     /var/lib/neutron/external/pids/1d1b3625-954d-4d8b-8b3f-323c25d9b42a.pid.haproxy
Nov 24 22:04:50 compute-0 ovn_metadata_agent[106771]:     daemon
Nov 24 22:04:50 compute-0 ovn_metadata_agent[106771]: 
Nov 24 22:04:50 compute-0 ovn_metadata_agent[106771]: defaults
Nov 24 22:04:50 compute-0 ovn_metadata_agent[106771]:     log global
Nov 24 22:04:50 compute-0 ovn_metadata_agent[106771]:     mode http
Nov 24 22:04:50 compute-0 ovn_metadata_agent[106771]:     option httplog
Nov 24 22:04:50 compute-0 ovn_metadata_agent[106771]:     option dontlognull
Nov 24 22:04:50 compute-0 ovn_metadata_agent[106771]:     option http-server-close
Nov 24 22:04:50 compute-0 ovn_metadata_agent[106771]:     option forwardfor
Nov 24 22:04:50 compute-0 ovn_metadata_agent[106771]:     retries                 3
Nov 24 22:04:50 compute-0 ovn_metadata_agent[106771]:     timeout http-request    30s
Nov 24 22:04:50 compute-0 ovn_metadata_agent[106771]:     timeout connect         30s
Nov 24 22:04:50 compute-0 ovn_metadata_agent[106771]:     timeout client          32s
Nov 24 22:04:50 compute-0 ovn_metadata_agent[106771]:     timeout server          32s
Nov 24 22:04:50 compute-0 ovn_metadata_agent[106771]:     timeout http-keep-alive 30s
Nov 24 22:04:50 compute-0 ovn_metadata_agent[106771]: 
Nov 24 22:04:50 compute-0 ovn_metadata_agent[106771]: 
Nov 24 22:04:50 compute-0 ovn_metadata_agent[106771]: listen listener
Nov 24 22:04:50 compute-0 ovn_metadata_agent[106771]:     bind 169.254.169.254:80
Nov 24 22:04:50 compute-0 ovn_metadata_agent[106771]:     server metadata /var/lib/neutron/metadata_proxy
Nov 24 22:04:50 compute-0 ovn_metadata_agent[106771]:     http-request add-header X-OVN-Network-ID 1d1b3625-954d-4d8b-8b3f-323c25d9b42a
Nov 24 22:04:50 compute-0 ovn_metadata_agent[106771]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 24 22:04:50 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:50.676 106776 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1d1b3625-954d-4d8b-8b3f-323c25d9b42a', 'env', 'PROCESS_TAG=haproxy-1d1b3625-954d-4d8b-8b3f-323c25d9b42a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1d1b3625-954d-4d8b-8b3f-323c25d9b42a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
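[annotation] The "Unable to access ... .pid.haproxy" DEBUG a few lines earlier is expected on first start: the agent probes the pidfile named in the rendered configuration, finds no process, and therefore launches haproxy inside the ovnmeta namespace with the command above. The config it just wrote binds 169.254.169.254:80, forwards to the unix socket /var/lib/neutron/metadata_proxy, and tags each request with X-OVN-Network-ID. A tolerant pidfile read equivalent to that probe might look like this (hypothetical helper, not neutron's own get_value_from_file):

    PIDFILE = "/var/lib/neutron/external/pids/1d1b3625-954d-4d8b-8b3f-323c25d9b42a.pid.haproxy"

    def read_pid(path=PIDFILE):
        # Returns None when the metadata proxy has never been started for this
        # network; the agent then spawns haproxy with the generated config
        # instead of treating the missing file as an error.
        try:
            with open(path) as f:
                return int(f.read().strip())
        except (FileNotFoundError, ValueError):
            return None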
Nov 24 22:04:50 compute-0 nova_compute[189608]: 2025-11-24 22:04:50.694 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:04:51 compute-0 podman[240129]: 2025-11-24 22:04:51.163574141 +0000 UTC m=+0.048718505 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 22:04:51 compute-0 podman[240129]: 2025-11-24 22:04:51.257478729 +0000 UTC m=+0.142623033 container create eb66f21bbd8a50c8e293d8d8ae441358549f8de3aeaf641a97b2f4e46fc0b215 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1d1b3625-954d-4d8b-8b3f-323c25d9b42a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 24 22:04:51 compute-0 systemd[1]: Started libpod-conmon-eb66f21bbd8a50c8e293d8d8ae441358549f8de3aeaf641a97b2f4e46fc0b215.scope.
Nov 24 22:04:51 compute-0 systemd[1]: Started libcrun container.
Nov 24 22:04:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e52f81f2f4cb79e326dabd416d85f75720cb228a555093335894afbeace8297d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 24 22:04:51 compute-0 podman[240129]: 2025-11-24 22:04:51.422908179 +0000 UTC m=+0.308052463 container init eb66f21bbd8a50c8e293d8d8ae441358549f8de3aeaf641a97b2f4e46fc0b215 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1d1b3625-954d-4d8b-8b3f-323c25d9b42a, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 24 22:04:51 compute-0 podman[240129]: 2025-11-24 22:04:51.437781502 +0000 UTC m=+0.322925756 container start eb66f21bbd8a50c8e293d8d8ae441358549f8de3aeaf641a97b2f4e46fc0b215 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1d1b3625-954d-4d8b-8b3f-323c25d9b42a, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 22:04:51 compute-0 neutron-haproxy-ovnmeta-1d1b3625-954d-4d8b-8b3f-323c25d9b42a[240144]: [NOTICE]   (240148) : New worker (240150) forked
Nov 24 22:04:51 compute-0 neutron-haproxy-ovnmeta-1d1b3625-954d-4d8b-8b3f-323c25d9b42a[240144]: [NOTICE]   (240148) : Loading success.
Nov 24 22:04:53 compute-0 nova_compute[189608]: 2025-11-24 22:04:53.045 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:04:53 compute-0 nova_compute[189608]: 2025-11-24 22:04:53.710 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:04:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:54.559 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:04:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:54.560 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:04:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:04:54.561 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:04:56 compute-0 podman[240159]: 2025-11-24 22:04:56.581523562 +0000 UTC m=+0.128848985 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118)
Nov 24 22:04:58 compute-0 nova_compute[189608]: 2025-11-24 22:04:58.049 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:04:58 compute-0 nova_compute[189608]: 2025-11-24 22:04:58.714 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:04:59 compute-0 podman[203795]: time="2025-11-24T22:04:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:04:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:04:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 24 22:04:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:04:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4781 "" "Go-http-client/1.1"
Nov 24 22:05:00 compute-0 podman[240178]: 2025-11-24 22:05:00.557214405 +0000 UTC m=+0.115665555 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Nov 24 22:05:01 compute-0 openstack_network_exporter[205945]: ERROR   22:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:05:01 compute-0 openstack_network_exporter[205945]: ERROR   22:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:05:01 compute-0 openstack_network_exporter[205945]: ERROR   22:05:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:05:01 compute-0 openstack_network_exporter[205945]: ERROR   22:05:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:05:01 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:05:01 compute-0 openstack_network_exporter[205945]: ERROR   22:05:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:05:01 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:05:03 compute-0 nova_compute[189608]: 2025-11-24 22:05:03.053 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:05:03 compute-0 nova_compute[189608]: 2025-11-24 22:05:03.720 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:05:04 compute-0 nova_compute[189608]: 2025-11-24 22:05:04.888 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:05:04 compute-0 NetworkManager[56413]: <info>  [1764021904.8895] manager: (patch-provnet-4ab70a2e-e19a-4436-8557-6ae686d167d5-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/23)
Nov 24 22:05:04 compute-0 NetworkManager[56413]: <info>  [1764021904.8899] device (patch-provnet-4ab70a2e-e19a-4436-8557-6ae686d167d5-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 22:05:04 compute-0 NetworkManager[56413]: <info>  [1764021904.8908] manager: (patch-br-int-to-provnet-4ab70a2e-e19a-4436-8557-6ae686d167d5): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/24)
Nov 24 22:05:04 compute-0 NetworkManager[56413]: <info>  [1764021904.8910] device (patch-br-int-to-provnet-4ab70a2e-e19a-4436-8557-6ae686d167d5)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 22:05:04 compute-0 NetworkManager[56413]: <info>  [1764021904.8918] manager: (patch-br-int-to-provnet-4ab70a2e-e19a-4436-8557-6ae686d167d5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Nov 24 22:05:04 compute-0 NetworkManager[56413]: <info>  [1764021904.8922] manager: (patch-provnet-4ab70a2e-e19a-4436-8557-6ae686d167d5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Nov 24 22:05:04 compute-0 NetworkManager[56413]: <info>  [1764021904.8925] device (patch-provnet-4ab70a2e-e19a-4436-8557-6ae686d167d5-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 24 22:05:04 compute-0 NetworkManager[56413]: <info>  [1764021904.8927] device (patch-br-int-to-provnet-4ab70a2e-e19a-4436-8557-6ae686d167d5)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 24 22:05:04 compute-0 ovn_controller[97889]: 2025-11-24T22:05:04Z|00032|binding|INFO|Releasing lport 13073b7d-8165-42cd-87f4-fb1eb15a5b94 from this chassis (sb_readonly=0)
Nov 24 22:05:04 compute-0 ovn_controller[97889]: 2025-11-24T22:05:04Z|00033|binding|INFO|Releasing lport 13073b7d-8165-42cd-87f4-fb1eb15a5b94 from this chassis (sb_readonly=0)
Nov 24 22:05:04 compute-0 nova_compute[189608]: 2025-11-24 22:05:04.960 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:05:04 compute-0 nova_compute[189608]: 2025-11-24 22:05:04.972 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:05:05 compute-0 nova_compute[189608]: 2025-11-24 22:05:05.300 189613 DEBUG nova.compute.manager [req-395c1f4e-d87c-4851-a63d-a4d5daaed858 req-d80b6a7e-e198-43be-ab36-ba233ae735dc c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Received event network-changed-5430cfcb-550b-4518-9caa-0720f99730b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:05:05 compute-0 nova_compute[189608]: 2025-11-24 22:05:05.300 189613 DEBUG nova.compute.manager [req-395c1f4e-d87c-4851-a63d-a4d5daaed858 req-d80b6a7e-e198-43be-ab36-ba233ae735dc c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Refreshing instance network info cache due to event network-changed-5430cfcb-550b-4518-9caa-0720f99730b9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 22:05:05 compute-0 nova_compute[189608]: 2025-11-24 22:05:05.301 189613 DEBUG oslo_concurrency.lockutils [req-395c1f4e-d87c-4851-a63d-a4d5daaed858 req-d80b6a7e-e198-43be-ab36-ba233ae735dc c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "refresh_cache-ea741b45-c6b4-41c0-a70f-c752b616faa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:05:05 compute-0 nova_compute[189608]: 2025-11-24 22:05:05.302 189613 DEBUG oslo_concurrency.lockutils [req-395c1f4e-d87c-4851-a63d-a4d5daaed858 req-d80b6a7e-e198-43be-ab36-ba233ae735dc c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquired lock "refresh_cache-ea741b45-c6b4-41c0-a70f-c752b616faa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:05:05 compute-0 nova_compute[189608]: 2025-11-24 22:05:05.302 189613 DEBUG nova.network.neutron [req-395c1f4e-d87c-4851-a63d-a4d5daaed858 req-d80b6a7e-e198-43be-ab36-ba233ae735dc c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Refreshing network info cache for port 5430cfcb-550b-4518-9caa-0720f99730b9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 22:05:05 compute-0 podman[240198]: 2025-11-24 22:05:05.552512963 +0000 UTC m=+0.108636327 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=kepler, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., io.buildah.version=1.29.0, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, architecture=x86_64, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, com.redhat.component=ubi9-container, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=)
Nov 24 22:05:05 compute-0 podman[240200]: 2025-11-24 22:05:05.554073232 +0000 UTC m=+0.103337303 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 22:05:05 compute-0 podman[240199]: 2025-11-24 22:05:05.561291856 +0000 UTC m=+0.104405086 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, release=1755695350, architecture=x86_64, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_id=edpm, managed_by=edpm_ansible, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Nov 24 22:05:05 compute-0 nova_compute[189608]: 2025-11-24 22:05:05.790 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:05:07 compute-0 nova_compute[189608]: 2025-11-24 22:05:07.564 189613 DEBUG nova.network.neutron [req-395c1f4e-d87c-4851-a63d-a4d5daaed858 req-d80b6a7e-e198-43be-ab36-ba233ae735dc c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Updated VIF entry in instance network info cache for port 5430cfcb-550b-4518-9caa-0720f99730b9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 22:05:07 compute-0 nova_compute[189608]: 2025-11-24 22:05:07.566 189613 DEBUG nova.network.neutron [req-395c1f4e-d87c-4851-a63d-a4d5daaed858 req-d80b6a7e-e198-43be-ab36-ba233ae735dc c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Updating instance_info_cache with network_info: [{"id": "5430cfcb-550b-4518-9caa-0720f99730b9", "address": "fa:16:3e:85:21:ae", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5430cfcb-55", "ovs_interfaceid": "5430cfcb-550b-4518-9caa-0720f99730b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:05:07 compute-0 nova_compute[189608]: 2025-11-24 22:05:07.585 189613 DEBUG oslo_concurrency.lockutils [req-395c1f4e-d87c-4851-a63d-a4d5daaed858 req-d80b6a7e-e198-43be-ab36-ba233ae735dc c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Releasing lock "refresh_cache-ea741b45-c6b4-41c0-a70f-c752b616faa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:05:08 compute-0 nova_compute[189608]: 2025-11-24 22:05:08.057 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:05:08 compute-0 nova_compute[189608]: 2025-11-24 22:05:08.724 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:05:08 compute-0 nova_compute[189608]: 2025-11-24 22:05:08.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:05:08 compute-0 nova_compute[189608]: 2025-11-24 22:05:08.794 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 24 22:05:08 compute-0 nova_compute[189608]: 2025-11-24 22:05:08.809 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 24 22:05:09 compute-0 nova_compute[189608]: 2025-11-24 22:05:09.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:05:09 compute-0 nova_compute[189608]: 2025-11-24 22:05:09.795 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 24 22:05:10 compute-0 podman[240253]: 2025-11-24 22:05:10.546787028 +0000 UTC m=+0.102100493 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 24 22:05:10 compute-0 nova_compute[189608]: 2025-11-24 22:05:10.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:05:10 compute-0 nova_compute[189608]: 2025-11-24 22:05:10.794 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 22:05:10 compute-0 nova_compute[189608]: 2025-11-24 22:05:10.794 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 22:05:11 compute-0 nova_compute[189608]: 2025-11-24 22:05:11.881 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "refresh_cache-ea741b45-c6b4-41c0-a70f-c752b616faa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:05:11 compute-0 nova_compute[189608]: 2025-11-24 22:05:11.883 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquired lock "refresh_cache-ea741b45-c6b4-41c0-a70f-c752b616faa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:05:11 compute-0 nova_compute[189608]: 2025-11-24 22:05:11.884 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 24 22:05:11 compute-0 nova_compute[189608]: 2025-11-24 22:05:11.885 189613 DEBUG nova.objects.instance [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lazy-loading 'info_cache' on Instance uuid ea741b45-c6b4-41c0-a70f-c752b616faa2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:05:13 compute-0 nova_compute[189608]: 2025-11-24 22:05:13.060 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:05:13 compute-0 nova_compute[189608]: 2025-11-24 22:05:13.731 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:05:14 compute-0 nova_compute[189608]: 2025-11-24 22:05:14.006 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Updating instance_info_cache with network_info: [{"id": "5430cfcb-550b-4518-9caa-0720f99730b9", "address": "fa:16:3e:85:21:ae", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5430cfcb-55", "ovs_interfaceid": "5430cfcb-550b-4518-9caa-0720f99730b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:05:14 compute-0 nova_compute[189608]: 2025-11-24 22:05:14.032 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Releasing lock "refresh_cache-ea741b45-c6b4-41c0-a70f-c752b616faa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:05:14 compute-0 nova_compute[189608]: 2025-11-24 22:05:14.033 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 24 22:05:14 compute-0 nova_compute[189608]: 2025-11-24 22:05:14.034 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:05:14 compute-0 nova_compute[189608]: 2025-11-24 22:05:14.035 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:05:14 compute-0 nova_compute[189608]: 2025-11-24 22:05:14.036 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:05:14 compute-0 nova_compute[189608]: 2025-11-24 22:05:14.037 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:05:14 compute-0 nova_compute[189608]: 2025-11-24 22:05:14.056 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Triggering sync for uuid ea741b45-c6b4-41c0-a70f-c752b616faa2 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 24 22:05:14 compute-0 nova_compute[189608]: 2025-11-24 22:05:14.057 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "ea741b45-c6b4-41c0-a70f-c752b616faa2" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:05:14 compute-0 nova_compute[189608]: 2025-11-24 22:05:14.057 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "ea741b45-c6b4-41c0-a70f-c752b616faa2" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:05:14 compute-0 nova_compute[189608]: 2025-11-24 22:05:14.058 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:05:14 compute-0 nova_compute[189608]: 2025-11-24 22:05:14.105 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:05:14 compute-0 nova_compute[189608]: 2025-11-24 22:05:14.107 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:05:14 compute-0 nova_compute[189608]: 2025-11-24 22:05:14.108 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:05:14 compute-0 nova_compute[189608]: 2025-11-24 22:05:14.109 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 22:05:14 compute-0 nova_compute[189608]: 2025-11-24 22:05:14.145 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "ea741b45-c6b4-41c0-a70f-c752b616faa2" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.088s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:05:14 compute-0 nova_compute[189608]: 2025-11-24 22:05:14.213 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:05:14 compute-0 podman[240279]: 2025-11-24 22:05:14.268251772 +0000 UTC m=+0.094555689 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 24 22:05:14 compute-0 nova_compute[189608]: 2025-11-24 22:05:14.288 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:05:14 compute-0 nova_compute[189608]: 2025-11-24 22:05:14.289 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:05:14 compute-0 podman[240278]: 2025-11-24 22:05:14.345814172 +0000 UTC m=+0.172948035 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 24 22:05:14 compute-0 nova_compute[189608]: 2025-11-24 22:05:14.354 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:05:14 compute-0 nova_compute[189608]: 2025-11-24 22:05:14.355 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:05:14 compute-0 nova_compute[189608]: 2025-11-24 22:05:14.411 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:05:14 compute-0 nova_compute[189608]: 2025-11-24 22:05:14.413 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:05:14 compute-0 nova_compute[189608]: 2025-11-24 22:05:14.470 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:05:14 compute-0 nova_compute[189608]: 2025-11-24 22:05:14.853 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:05:14 compute-0 nova_compute[189608]: 2025-11-24 22:05:14.854 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5270MB free_disk=72.23012924194336GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 22:05:14 compute-0 nova_compute[189608]: 2025-11-24 22:05:14.854 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:05:14 compute-0 nova_compute[189608]: 2025-11-24 22:05:14.855 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:05:15 compute-0 nova_compute[189608]: 2025-11-24 22:05:15.199 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance ea741b45-c6b4-41c0-a70f-c752b616faa2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:05:15 compute-0 nova_compute[189608]: 2025-11-24 22:05:15.199 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 22:05:15 compute-0 nova_compute[189608]: 2025-11-24 22:05:15.199 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 22:05:15 compute-0 nova_compute[189608]: 2025-11-24 22:05:15.282 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Refreshing inventories for resource provider 7680d048-14f1-46f8-a34d-a7eb32eb11df _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 24 22:05:15 compute-0 nova_compute[189608]: 2025-11-24 22:05:15.371 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Updating ProviderTree inventory for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 24 22:05:15 compute-0 nova_compute[189608]: 2025-11-24 22:05:15.371 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Updating inventory in ProviderTree for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 24 22:05:15 compute-0 nova_compute[189608]: 2025-11-24 22:05:15.389 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Refreshing aggregate associations for resource provider 7680d048-14f1-46f8-a34d-a7eb32eb11df, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 24 22:05:15 compute-0 nova_compute[189608]: 2025-11-24 22:05:15.414 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Refreshing trait associations for resource provider 7680d048-14f1-46f8-a34d-a7eb32eb11df, traits: HW_CPU_X86_AVX2,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NODE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE4A,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_ACCELERATORS,HW_CPU_X86_SHA,HW_CPU_X86_AVX,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_1_2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 24 22:05:15 compute-0 nova_compute[189608]: 2025-11-24 22:05:15.456 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Updating inventory in ProviderTree for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 24 22:05:15 compute-0 nova_compute[189608]: 2025-11-24 22:05:15.530 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Updated inventory for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Nov 24 22:05:15 compute-0 nova_compute[189608]: 2025-11-24 22:05:15.531 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Updating resource provider 7680d048-14f1-46f8-a34d-a7eb32eb11df generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 24 22:05:15 compute-0 nova_compute[189608]: 2025-11-24 22:05:15.532 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Updating inventory in ProviderTree for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 24 22:05:15 compute-0 nova_compute[189608]: 2025-11-24 22:05:15.551 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 22:05:15 compute-0 nova_compute[189608]: 2025-11-24 22:05:15.552 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:05:15 compute-0 nova_compute[189608]: 2025-11-24 22:05:15.552 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:05:17 compute-0 nova_compute[189608]: 2025-11-24 22:05:17.297 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:05:17 compute-0 nova_compute[189608]: 2025-11-24 22:05:17.298 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:05:17 compute-0 nova_compute[189608]: 2025-11-24 22:05:17.299 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:05:17 compute-0 nova_compute[189608]: 2025-11-24 22:05:17.299 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:05:17 compute-0 nova_compute[189608]: 2025-11-24 22:05:17.300 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 22:05:18 compute-0 nova_compute[189608]: 2025-11-24 22:05:18.064 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:05:18 compute-0 nova_compute[189608]: 2025-11-24 22:05:18.735 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:05:19 compute-0 sshd-session[240331]: Connection closed by authenticating user root 80.94.95.116 port 19200 [preauth]
Nov 24 22:05:20 compute-0 podman[240348]: 2025-11-24 22:05:20.544931488 +0000 UTC m=+0.088324386 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 24 22:05:20 compute-0 ovn_controller[97889]: 2025-11-24T22:05:20Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:85:21:ae 192.168.0.169
Nov 24 22:05:20 compute-0 ovn_controller[97889]: 2025-11-24T22:05:20Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:85:21:ae 192.168.0.169
Nov 24 22:05:23 compute-0 nova_compute[189608]: 2025-11-24 22:05:23.066 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:05:23 compute-0 nova_compute[189608]: 2025-11-24 22:05:23.738 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:05:27 compute-0 podman[240371]: 2025-11-24 22:05:27.511563198 +0000 UTC m=+0.073479873 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:05:28 compute-0 nova_compute[189608]: 2025-11-24 22:05:28.069 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:05:28 compute-0 nova_compute[189608]: 2025-11-24 22:05:28.741 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:05:29 compute-0 podman[203795]: time="2025-11-24T22:05:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:05:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:05:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 24 22:05:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:05:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4784 "" "Go-http-client/1.1"
Nov 24 22:05:31 compute-0 sshd-session[240389]: Invalid user sol from 45.148.10.240 port 44980
Nov 24 22:05:31 compute-0 sshd-session[240389]: Connection closed by invalid user sol 45.148.10.240 port 44980 [preauth]
Nov 24 22:05:31 compute-0 podman[240391]: 2025-11-24 22:05:31.174121247 +0000 UTC m=+0.106628106 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Nov 24 22:05:31 compute-0 openstack_network_exporter[205945]: ERROR   22:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:05:31 compute-0 openstack_network_exporter[205945]: ERROR   22:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:05:31 compute-0 openstack_network_exporter[205945]: ERROR   22:05:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:05:31 compute-0 openstack_network_exporter[205945]: ERROR   22:05:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:05:31 compute-0 openstack_network_exporter[205945]: ERROR   22:05:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:05:33 compute-0 nova_compute[189608]: 2025-11-24 22:05:33.072 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:05:33 compute-0 nova_compute[189608]: 2025-11-24 22:05:33.744 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:05:34 compute-0 ovn_controller[97889]: 2025-11-24T22:05:34Z|00034|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Nov 24 22:05:36 compute-0 podman[240413]: 2025-11-24 22:05:36.548141964 +0000 UTC m=+0.103353097 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, release=1214.1726694543, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, name=ubi9, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., release-0.7.12=, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64)
Nov 24 22:05:36 compute-0 podman[240414]: 2025-11-24 22:05:36.552482915 +0000 UTC m=+0.104544043 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64)
Nov 24 22:05:36 compute-0 podman[240415]: 2025-11-24 22:05:36.587672429 +0000 UTC m=+0.129581279 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 22:05:38 compute-0 nova_compute[189608]: 2025-11-24 22:05:38.076 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:05:38 compute-0 nova_compute[189608]: 2025-11-24 22:05:38.748 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:05:41 compute-0 podman[240469]: 2025-11-24 22:05:41.541820696 +0000 UTC m=+0.103356636 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 22:05:43 compute-0 nova_compute[189608]: 2025-11-24 22:05:43.079 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:05:43 compute-0 nova_compute[189608]: 2025-11-24 22:05:43.751 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:05:44 compute-0 podman[240493]: 2025-11-24 22:05:44.557112559 +0000 UTC m=+0.096842890 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 24 22:05:44 compute-0 podman[240492]: 2025-11-24 22:05:44.590063635 +0000 UTC m=+0.133731385 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2)
Nov 24 22:05:47 compute-0 sshd-session[240535]: Invalid user solana from 193.32.162.145 port 43894
Nov 24 22:05:47 compute-0 sshd-session[240535]: Connection closed by invalid user solana 193.32.162.145 port 43894 [preauth]
Nov 24 22:05:48 compute-0 nova_compute[189608]: 2025-11-24 22:05:48.082 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:05:48 compute-0 nova_compute[189608]: 2025-11-24 22:05:48.755 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:05:51 compute-0 podman[240538]: 2025-11-24 22:05:51.499901601 +0000 UTC m=+0.062310826 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 24 22:05:52 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:05:52.160 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7e:33:1a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '7e:8a:65:da:aa:a2'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:05:52 compute-0 nova_compute[189608]: 2025-11-24 22:05:52.161 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:05:52 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:05:52.162 106776 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 22:05:53 compute-0 nova_compute[189608]: 2025-11-24 22:05:53.084 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:05:53 compute-0 nova_compute[189608]: 2025-11-24 22:05:53.757 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:05:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:05:54.560 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:05:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:05:54.561 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:05:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:05:54.561 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:05:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:05:56.165 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d2f80616-70e9-484c-836d-1edab81fe5d9, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:05:58 compute-0 nova_compute[189608]: 2025-11-24 22:05:58.087 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:05:58 compute-0 podman[240562]: 2025-11-24 22:05:58.538821524 +0000 UTC m=+0.100567864 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 24 22:05:58 compute-0 nova_compute[189608]: 2025-11-24 22:05:58.602 189613 DEBUG oslo_concurrency.lockutils [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "828f9a8f-602f-4ad5-a0b0-5a48a328d20e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:05:58 compute-0 nova_compute[189608]: 2025-11-24 22:05:58.603 189613 DEBUG oslo_concurrency.lockutils [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "828f9a8f-602f-4ad5-a0b0-5a48a328d20e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:05:58 compute-0 nova_compute[189608]: 2025-11-24 22:05:58.619 189613 DEBUG nova.compute.manager [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 24 22:05:58 compute-0 nova_compute[189608]: 2025-11-24 22:05:58.687 189613 DEBUG oslo_concurrency.lockutils [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:05:58 compute-0 nova_compute[189608]: 2025-11-24 22:05:58.687 189613 DEBUG oslo_concurrency.lockutils [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:05:58 compute-0 nova_compute[189608]: 2025-11-24 22:05:58.696 189613 DEBUG nova.virt.hardware [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 24 22:05:58 compute-0 nova_compute[189608]: 2025-11-24 22:05:58.697 189613 INFO nova.compute.claims [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Claim successful on node compute-0.ctlplane.example.com
Nov 24 22:05:58 compute-0 nova_compute[189608]: 2025-11-24 22:05:58.760 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:05:58 compute-0 nova_compute[189608]: 2025-11-24 22:05:58.854 189613 DEBUG nova.compute.provider_tree [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:05:58 compute-0 nova_compute[189608]: 2025-11-24 22:05:58.869 189613 DEBUG nova.scheduler.client.report [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:05:58 compute-0 nova_compute[189608]: 2025-11-24 22:05:58.893 189613 DEBUG oslo_concurrency.lockutils [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.206s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:05:58 compute-0 nova_compute[189608]: 2025-11-24 22:05:58.894 189613 DEBUG nova.compute.manager [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 24 22:05:58 compute-0 nova_compute[189608]: 2025-11-24 22:05:58.940 189613 DEBUG nova.compute.manager [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 24 22:05:58 compute-0 nova_compute[189608]: 2025-11-24 22:05:58.941 189613 DEBUG nova.network.neutron [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 24 22:05:58 compute-0 nova_compute[189608]: 2025-11-24 22:05:58.966 189613 INFO nova.virt.libvirt.driver [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 24 22:05:59 compute-0 nova_compute[189608]: 2025-11-24 22:05:59.021 189613 DEBUG nova.compute.manager [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 24 22:05:59 compute-0 nova_compute[189608]: 2025-11-24 22:05:59.111 189613 DEBUG nova.compute.manager [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 24 22:05:59 compute-0 nova_compute[189608]: 2025-11-24 22:05:59.112 189613 DEBUG nova.virt.libvirt.driver [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 24 22:05:59 compute-0 nova_compute[189608]: 2025-11-24 22:05:59.113 189613 INFO nova.virt.libvirt.driver [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Creating image(s)
Nov 24 22:05:59 compute-0 nova_compute[189608]: 2025-11-24 22:05:59.115 189613 DEBUG oslo_concurrency.lockutils [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "/var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:05:59 compute-0 nova_compute[189608]: 2025-11-24 22:05:59.115 189613 DEBUG oslo_concurrency.lockutils [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "/var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:05:59 compute-0 nova_compute[189608]: 2025-11-24 22:05:59.117 189613 DEBUG oslo_concurrency.lockutils [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "/var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:05:59 compute-0 nova_compute[189608]: 2025-11-24 22:05:59.142 189613 DEBUG oslo_concurrency.processutils [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bc2e84058646d2c6ba728b20ebecd0301036e9ed --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:05:59 compute-0 nova_compute[189608]: 2025-11-24 22:05:59.217 189613 DEBUG oslo_concurrency.processutils [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bc2e84058646d2c6ba728b20ebecd0301036e9ed --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:05:59 compute-0 nova_compute[189608]: 2025-11-24 22:05:59.219 189613 DEBUG oslo_concurrency.lockutils [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "bc2e84058646d2c6ba728b20ebecd0301036e9ed" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:05:59 compute-0 nova_compute[189608]: 2025-11-24 22:05:59.221 189613 DEBUG oslo_concurrency.lockutils [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "bc2e84058646d2c6ba728b20ebecd0301036e9ed" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:05:59 compute-0 nova_compute[189608]: 2025-11-24 22:05:59.250 189613 DEBUG oslo_concurrency.processutils [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bc2e84058646d2c6ba728b20ebecd0301036e9ed --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:05:59 compute-0 nova_compute[189608]: 2025-11-24 22:05:59.348 189613 DEBUG oslo_concurrency.processutils [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bc2e84058646d2c6ba728b20ebecd0301036e9ed --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:05:59 compute-0 nova_compute[189608]: 2025-11-24 22:05:59.350 189613 DEBUG oslo_concurrency.processutils [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/bc2e84058646d2c6ba728b20ebecd0301036e9ed,backing_fmt=raw /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:05:59 compute-0 nova_compute[189608]: 2025-11-24 22:05:59.405 189613 DEBUG oslo_concurrency.processutils [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/bc2e84058646d2c6ba728b20ebecd0301036e9ed,backing_fmt=raw /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk 1073741824" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:05:59 compute-0 nova_compute[189608]: 2025-11-24 22:05:59.407 189613 DEBUG oslo_concurrency.lockutils [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "bc2e84058646d2c6ba728b20ebecd0301036e9ed" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.187s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:05:59 compute-0 nova_compute[189608]: 2025-11-24 22:05:59.409 189613 DEBUG oslo_concurrency.processutils [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bc2e84058646d2c6ba728b20ebecd0301036e9ed --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:05:59 compute-0 nova_compute[189608]: 2025-11-24 22:05:59.484 189613 DEBUG oslo_concurrency.processutils [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bc2e84058646d2c6ba728b20ebecd0301036e9ed --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:05:59 compute-0 nova_compute[189608]: 2025-11-24 22:05:59.485 189613 DEBUG nova.virt.disk.api [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Checking if we can resize image /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 24 22:05:59 compute-0 nova_compute[189608]: 2025-11-24 22:05:59.486 189613 DEBUG oslo_concurrency.processutils [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:05:59 compute-0 nova_compute[189608]: 2025-11-24 22:05:59.580 189613 DEBUG oslo_concurrency.processutils [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:05:59 compute-0 nova_compute[189608]: 2025-11-24 22:05:59.582 189613 DEBUG nova.virt.disk.api [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Cannot resize image /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Nov 24 22:05:59 compute-0 nova_compute[189608]: 2025-11-24 22:05:59.583 189613 DEBUG nova.objects.instance [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lazy-loading 'migration_context' on Instance uuid 828f9a8f-602f-4ad5-a0b0-5a48a328d20e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:05:59 compute-0 nova_compute[189608]: 2025-11-24 22:05:59.600 189613 DEBUG oslo_concurrency.lockutils [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "/var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:05:59 compute-0 nova_compute[189608]: 2025-11-24 22:05:59.601 189613 DEBUG oslo_concurrency.lockutils [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "/var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:05:59 compute-0 nova_compute[189608]: 2025-11-24 22:05:59.602 189613 DEBUG oslo_concurrency.lockutils [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "/var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:05:59 compute-0 nova_compute[189608]: 2025-11-24 22:05:59.614 189613 DEBUG oslo_concurrency.processutils [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:05:59 compute-0 nova_compute[189608]: 2025-11-24 22:05:59.669 189613 DEBUG oslo_concurrency.processutils [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:05:59 compute-0 nova_compute[189608]: 2025-11-24 22:05:59.670 189613 DEBUG oslo_concurrency.lockutils [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:05:59 compute-0 nova_compute[189608]: 2025-11-24 22:05:59.671 189613 DEBUG oslo_concurrency.lockutils [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:05:59 compute-0 nova_compute[189608]: 2025-11-24 22:05:59.682 189613 DEBUG oslo_concurrency.processutils [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:05:59 compute-0 nova_compute[189608]: 2025-11-24 22:05:59.737 189613 DEBUG oslo_concurrency.processutils [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:05:59 compute-0 nova_compute[189608]: 2025-11-24 22:05:59.739 189613 DEBUG oslo_concurrency.processutils [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:05:59 compute-0 podman[203795]: time="2025-11-24T22:05:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:05:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:05:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 24 22:05:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:05:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4788 "" "Go-http-client/1.1"
Nov 24 22:05:59 compute-0 nova_compute[189608]: 2025-11-24 22:05:59.784 189613 DEBUG oslo_concurrency.processutils [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0 1073741824" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:05:59 compute-0 nova_compute[189608]: 2025-11-24 22:05:59.786 189613 DEBUG oslo_concurrency.lockutils [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.115s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:05:59 compute-0 nova_compute[189608]: 2025-11-24 22:05:59.786 189613 DEBUG oslo_concurrency.processutils [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:05:59 compute-0 nova_compute[189608]: 2025-11-24 22:05:59.847 189613 DEBUG oslo_concurrency.processutils [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:05:59 compute-0 nova_compute[189608]: 2025-11-24 22:05:59.849 189613 DEBUG nova.virt.libvirt.driver [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 24 22:05:59 compute-0 nova_compute[189608]: 2025-11-24 22:05:59.850 189613 DEBUG nova.virt.libvirt.driver [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Ensure instance console log exists: /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 24 22:05:59 compute-0 nova_compute[189608]: 2025-11-24 22:05:59.851 189613 DEBUG oslo_concurrency.lockutils [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:05:59 compute-0 nova_compute[189608]: 2025-11-24 22:05:59.852 189613 DEBUG oslo_concurrency.lockutils [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:05:59 compute-0 nova_compute[189608]: 2025-11-24 22:05:59.853 189613 DEBUG oslo_concurrency.lockutils [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:06:01 compute-0 openstack_network_exporter[205945]: ERROR   22:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:06:01 compute-0 openstack_network_exporter[205945]: ERROR   22:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:06:01 compute-0 openstack_network_exporter[205945]: ERROR   22:06:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:06:01 compute-0 openstack_network_exporter[205945]: ERROR   22:06:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:06:01 compute-0 openstack_network_exporter[205945]: ERROR   22:06:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:06:01 compute-0 podman[240609]: 2025-11-24 22:06:01.541956517 +0000 UTC m=+0.098010275 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 24 22:06:02 compute-0 nova_compute[189608]: 2025-11-24 22:06:02.396 189613 DEBUG nova.network.neutron [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Successfully updated port: 3223b8cb-74bd-4db9-8dd2-441f7c81c71c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 24 22:06:02 compute-0 nova_compute[189608]: 2025-11-24 22:06:02.416 189613 DEBUG oslo_concurrency.lockutils [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "refresh_cache-828f9a8f-602f-4ad5-a0b0-5a48a328d20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:06:02 compute-0 nova_compute[189608]: 2025-11-24 22:06:02.417 189613 DEBUG oslo_concurrency.lockutils [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquired lock "refresh_cache-828f9a8f-602f-4ad5-a0b0-5a48a328d20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:06:02 compute-0 nova_compute[189608]: 2025-11-24 22:06:02.418 189613 DEBUG nova.network.neutron [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 22:06:02 compute-0 nova_compute[189608]: 2025-11-24 22:06:02.494 189613 DEBUG nova.compute.manager [req-b99f8424-08a2-41df-b945-d4495981dbb5 req-76490f24-1634-4c65-bded-e550cff3d10a c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Received event network-changed-3223b8cb-74bd-4db9-8dd2-441f7c81c71c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:06:02 compute-0 nova_compute[189608]: 2025-11-24 22:06:02.495 189613 DEBUG nova.compute.manager [req-b99f8424-08a2-41df-b945-d4495981dbb5 req-76490f24-1634-4c65-bded-e550cff3d10a c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Refreshing instance network info cache due to event network-changed-3223b8cb-74bd-4db9-8dd2-441f7c81c71c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 22:06:02 compute-0 nova_compute[189608]: 2025-11-24 22:06:02.495 189613 DEBUG oslo_concurrency.lockutils [req-b99f8424-08a2-41df-b945-d4495981dbb5 req-76490f24-1634-4c65-bded-e550cff3d10a c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "refresh_cache-828f9a8f-602f-4ad5-a0b0-5a48a328d20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:06:02 compute-0 nova_compute[189608]: 2025-11-24 22:06:02.575 189613 DEBUG nova.network.neutron [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.091 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:06:03 compute-0 sshd-session[240627]: Invalid user support from 78.128.112.74 port 39408
Nov 24 22:06:03 compute-0 sshd-session[240627]: Connection closed by invalid user support 78.128.112.74 port 39408 [preauth]
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.672 189613 DEBUG nova.network.neutron [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Updating instance_info_cache with network_info: [{"id": "3223b8cb-74bd-4db9-8dd2-441f7c81c71c", "address": "fa:16:3e:3e:40:c3", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.166", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3223b8cb-74", "ovs_interfaceid": "3223b8cb-74bd-4db9-8dd2-441f7c81c71c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.690 189613 DEBUG oslo_concurrency.lockutils [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Releasing lock "refresh_cache-828f9a8f-602f-4ad5-a0b0-5a48a328d20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.690 189613 DEBUG nova.compute.manager [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Instance network_info: |[{"id": "3223b8cb-74bd-4db9-8dd2-441f7c81c71c", "address": "fa:16:3e:3e:40:c3", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.166", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3223b8cb-74", "ovs_interfaceid": "3223b8cb-74bd-4db9-8dd2-441f7c81c71c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.691 189613 DEBUG oslo_concurrency.lockutils [req-b99f8424-08a2-41df-b945-d4495981dbb5 req-76490f24-1634-4c65-bded-e550cff3d10a c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquired lock "refresh_cache-828f9a8f-602f-4ad5-a0b0-5a48a328d20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.691 189613 DEBUG nova.network.neutron [req-b99f8424-08a2-41df-b945-d4495981dbb5 req-76490f24-1634-4c65-bded-e550cff3d10a c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Refreshing network info cache for port 3223b8cb-74bd-4db9-8dd2-441f7c81c71c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.696 189613 DEBUG nova.virt.libvirt.driver [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Start _get_guest_xml network_info=[{"id": "3223b8cb-74bd-4db9-8dd2-441f7c81c71c", "address": "fa:16:3e:3e:40:c3", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.166", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3223b8cb-74", "ovs_interfaceid": "3223b8cb-74bd-4db9-8dd2-441f7c81c71c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-24T22:03:25Z,direct_url=<?>,disk_format='qcow2',id=a63b9561-12dc-4c11-858f-aa6fafbed036,min_disk=0,min_ram=0,name='cirros',owner='309342b7e3e849b2a5dd56651d8fa068',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-24T22:03:26Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'size': 0, 'encryption_format': None, 'guest_format': None, 'image_id': 'a63b9561-12dc-4c11-858f-aa6fafbed036'}], 'ephemerals': [{'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vdb', 'device_type': 'disk', 'encryption_secret_uuid': None, 'size': 1, 'encryption_format': None, 'guest_format': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.712 189613 WARNING nova.virt.libvirt.driver [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.720 189613 DEBUG nova.virt.libvirt.host [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.721 189613 DEBUG nova.virt.libvirt.host [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.727 189613 DEBUG nova.virt.libvirt.host [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.728 189613 DEBUG nova.virt.libvirt.host [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.728 189613 DEBUG nova.virt.libvirt.driver [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.729 189613 DEBUG nova.virt.hardware [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-24T22:03:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='cb3b4b8d-66d1-4524-ab48-d562ca8e5b4b',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-24T22:03:25Z,direct_url=<?>,disk_format='qcow2',id=a63b9561-12dc-4c11-858f-aa6fafbed036,min_disk=0,min_ram=0,name='cirros',owner='309342b7e3e849b2a5dd56651d8fa068',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-24T22:03:26Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.729 189613 DEBUG nova.virt.hardware [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.729 189613 DEBUG nova.virt.hardware [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.730 189613 DEBUG nova.virt.hardware [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.730 189613 DEBUG nova.virt.hardware [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.730 189613 DEBUG nova.virt.hardware [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.730 189613 DEBUG nova.virt.hardware [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.731 189613 DEBUG nova.virt.hardware [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.731 189613 DEBUG nova.virt.hardware [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.731 189613 DEBUG nova.virt.hardware [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.731 189613 DEBUG nova.virt.hardware [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.736 189613 DEBUG nova.virt.libvirt.vif [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T22:05:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-7lqlsa5-5i26uioiiugb-edy6vcflj352-vnf-mrrqmlwdy5wk',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-7lqlsa5-5i26uioiiugb-edy6vcflj352-vnf-mrrqmlwdy5wk',id=2,image_ref='a63b9561-12dc-4c11-858f-aa6fafbed036',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='b438824c-ce52-4539-9db6-355e0ca018db'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='309342b7e3e849b2a5dd56651d8fa068',ramdisk_id='',reservation_id='r-51rhhz0y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='a63b9561-12dc-4c11-858f-aa6fafbed036',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T22:05:59Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04MTg4NjY0NzkwNjE2OTk0NzgzPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTgxODg2NjQ3OTA2MTY5OTQ3ODM9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODE4ODY2NDc5MDYxNjk5NDc4Mz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBo
YXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTgxODg2NjQ3OTA2MTY5OTQ3ODM9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04MTg4NjY0NzkwNjE2OTk0NzgzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04MTg4NjY0NzkwNjE2OTk0NzgzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5
kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJnc
Nov 24 22:06:03 compute-0 nova_compute[189608]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODE4ODY2NDc5MDYxNjk5NDc4Mz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTgxODg2NjQ3OTA2MTY5OTQ3ODM9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04MTg4NjY0NzkwNjE2OTk0NzgzPT0tLQo=',user_id='572aaac113f54af8a894707849aed6bf',uuid=828f9a8f-602f-4ad5-a0b0-5a48a328d20e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3223b8cb-74bd-4db9-8dd2-441f7c81c71c", "address": "fa:16:3e:3e:40:c3", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.166", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3223b8cb-74", "ovs_interfaceid": "3223b8cb-74bd-4db9-8dd2-441f7c81c71c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.736 189613 DEBUG nova.network.os_vif_util [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Converting VIF {"id": "3223b8cb-74bd-4db9-8dd2-441f7c81c71c", "address": "fa:16:3e:3e:40:c3", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.166", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3223b8cb-74", "ovs_interfaceid": "3223b8cb-74bd-4db9-8dd2-441f7c81c71c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.737 189613 DEBUG nova.network.os_vif_util [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3e:40:c3,bridge_name='br-int',has_traffic_filtering=True,id=3223b8cb-74bd-4db9-8dd2-441f7c81c71c,network=Network(1d1b3625-954d-4d8b-8b3f-323c25d9b42a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3223b8cb-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.738 189613 DEBUG nova.objects.instance [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lazy-loading 'pci_devices' on Instance uuid 828f9a8f-602f-4ad5-a0b0-5a48a328d20e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.758 189613 DEBUG nova.virt.libvirt.driver [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] End _get_guest_xml xml=<domain type="kvm">
Nov 24 22:06:03 compute-0 nova_compute[189608]:   <uuid>828f9a8f-602f-4ad5-a0b0-5a48a328d20e</uuid>
Nov 24 22:06:03 compute-0 nova_compute[189608]:   <name>instance-00000002</name>
Nov 24 22:06:03 compute-0 nova_compute[189608]:   <memory>524288</memory>
Nov 24 22:06:03 compute-0 nova_compute[189608]:   <vcpu>1</vcpu>
Nov 24 22:06:03 compute-0 nova_compute[189608]:   <metadata>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 22:06:03 compute-0 nova_compute[189608]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:       <nova:name>vn-7lqlsa5-5i26uioiiugb-edy6vcflj352-vnf-mrrqmlwdy5wk</nova:name>
Nov 24 22:06:03 compute-0 nova_compute[189608]:       <nova:creationTime>2025-11-24 22:06:03</nova:creationTime>
Nov 24 22:06:03 compute-0 nova_compute[189608]:       <nova:flavor name="m1.small">
Nov 24 22:06:03 compute-0 nova_compute[189608]:         <nova:memory>512</nova:memory>
Nov 24 22:06:03 compute-0 nova_compute[189608]:         <nova:disk>1</nova:disk>
Nov 24 22:06:03 compute-0 nova_compute[189608]:         <nova:swap>0</nova:swap>
Nov 24 22:06:03 compute-0 nova_compute[189608]:         <nova:ephemeral>1</nova:ephemeral>
Nov 24 22:06:03 compute-0 nova_compute[189608]:         <nova:vcpus>1</nova:vcpus>
Nov 24 22:06:03 compute-0 nova_compute[189608]:       </nova:flavor>
Nov 24 22:06:03 compute-0 nova_compute[189608]:       <nova:owner>
Nov 24 22:06:03 compute-0 nova_compute[189608]:         <nova:user uuid="572aaac113f54af8a894707849aed6bf">admin</nova:user>
Nov 24 22:06:03 compute-0 nova_compute[189608]:         <nova:project uuid="309342b7e3e849b2a5dd56651d8fa068">admin</nova:project>
Nov 24 22:06:03 compute-0 nova_compute[189608]:       </nova:owner>
Nov 24 22:06:03 compute-0 nova_compute[189608]:       <nova:root type="image" uuid="a63b9561-12dc-4c11-858f-aa6fafbed036"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:       <nova:ports>
Nov 24 22:06:03 compute-0 nova_compute[189608]:         <nova:port uuid="3223b8cb-74bd-4db9-8dd2-441f7c81c71c">
Nov 24 22:06:03 compute-0 nova_compute[189608]:           <nova:ip type="fixed" address="192.168.0.166" ipVersion="4"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:         </nova:port>
Nov 24 22:06:03 compute-0 nova_compute[189608]:       </nova:ports>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     </nova:instance>
Nov 24 22:06:03 compute-0 nova_compute[189608]:   </metadata>
Nov 24 22:06:03 compute-0 nova_compute[189608]:   <sysinfo type="smbios">
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <system>
Nov 24 22:06:03 compute-0 nova_compute[189608]:       <entry name="manufacturer">RDO</entry>
Nov 24 22:06:03 compute-0 nova_compute[189608]:       <entry name="product">OpenStack Compute</entry>
Nov 24 22:06:03 compute-0 nova_compute[189608]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 22:06:03 compute-0 nova_compute[189608]:       <entry name="serial">828f9a8f-602f-4ad5-a0b0-5a48a328d20e</entry>
Nov 24 22:06:03 compute-0 nova_compute[189608]:       <entry name="uuid">828f9a8f-602f-4ad5-a0b0-5a48a328d20e</entry>
Nov 24 22:06:03 compute-0 nova_compute[189608]:       <entry name="family">Virtual Machine</entry>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     </system>
Nov 24 22:06:03 compute-0 nova_compute[189608]:   </sysinfo>
Nov 24 22:06:03 compute-0 nova_compute[189608]:   <os>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <boot dev="hd"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <smbios mode="sysinfo"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:   </os>
Nov 24 22:06:03 compute-0 nova_compute[189608]:   <features>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <acpi/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <apic/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <vmcoreinfo/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:   </features>
Nov 24 22:06:03 compute-0 nova_compute[189608]:   <clock offset="utc">
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <timer name="pit" tickpolicy="delay"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <timer name="hpet" present="no"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:   </clock>
Nov 24 22:06:03 compute-0 nova_compute[189608]:   <cpu mode="host-model" match="exact">
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <topology sockets="1" cores="1" threads="1"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:   </cpu>
Nov 24 22:06:03 compute-0 nova_compute[189608]:   <devices>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <disk type="file" device="disk">
Nov 24 22:06:03 compute-0 nova_compute[189608]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:       <source file="/var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:       <target dev="vda" bus="virtio"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     </disk>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <disk type="file" device="disk">
Nov 24 22:06:03 compute-0 nova_compute[189608]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:       <source file="/var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:       <target dev="vdb" bus="virtio"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     </disk>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <disk type="file" device="cdrom">
Nov 24 22:06:03 compute-0 nova_compute[189608]:       <driver name="qemu" type="raw" cache="none"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:       <source file="/var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.config"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:       <target dev="sda" bus="sata"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     </disk>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <interface type="ethernet">
Nov 24 22:06:03 compute-0 nova_compute[189608]:       <mac address="fa:16:3e:3e:40:c3"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:       <model type="virtio"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:       <driver name="vhost" rx_queue_size="512"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:       <mtu size="1442"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:       <target dev="tap3223b8cb-74"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     </interface>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <serial type="pty">
Nov 24 22:06:03 compute-0 nova_compute[189608]:       <log file="/var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/console.log" append="off"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     </serial>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <video>
Nov 24 22:06:03 compute-0 nova_compute[189608]:       <model type="virtio"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     </video>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <input type="tablet" bus="usb"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <rng model="virtio">
Nov 24 22:06:03 compute-0 nova_compute[189608]:       <backend model="random">/dev/urandom</backend>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     </rng>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <controller type="usb" index="0"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     <memballoon model="virtio">
Nov 24 22:06:03 compute-0 nova_compute[189608]:       <stats period="10"/>
Nov 24 22:06:03 compute-0 nova_compute[189608]:     </memballoon>
Nov 24 22:06:03 compute-0 nova_compute[189608]:   </devices>
Nov 24 22:06:03 compute-0 nova_compute[189608]: </domain>
Nov 24 22:06:03 compute-0 nova_compute[189608]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.758 189613 DEBUG nova.compute.manager [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Preparing to wait for external event network-vif-plugged-3223b8cb-74bd-4db9-8dd2-441f7c81c71c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.758 189613 DEBUG oslo_concurrency.lockutils [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "828f9a8f-602f-4ad5-a0b0-5a48a328d20e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.759 189613 DEBUG oslo_concurrency.lockutils [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "828f9a8f-602f-4ad5-a0b0-5a48a328d20e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.759 189613 DEBUG oslo_concurrency.lockutils [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "828f9a8f-602f-4ad5-a0b0-5a48a328d20e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.759 189613 DEBUG nova.virt.libvirt.vif [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T22:05:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-7lqlsa5-5i26uioiiugb-edy6vcflj352-vnf-mrrqmlwdy5wk',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-7lqlsa5-5i26uioiiugb-edy6vcflj352-vnf-mrrqmlwdy5wk',id=2,image_ref='a63b9561-12dc-4c11-858f-aa6fafbed036',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='b438824c-ce52-4539-9db6-355e0ca018db'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='309342b7e3e849b2a5dd56651d8fa068',ramdisk_id='',reservation_id='r-51rhhz0y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='a63b9561-12dc-4c11-858f-aa6fafbed036',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T22:05:59Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04MTg4NjY0NzkwNjE2OTk0NzgzPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTgxODg2NjQ3OTA2MTY5OTQ3ODM9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODE4ODY2NDc5MDYxNjk5NDc4Mz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm
50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTgxODg2NjQ3OTA2MTY5OTQ3ODM9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04MTg4NjY0NzkwNjE2OTk0NzgzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04MTg4NjY0NzkwNjE2OTk0NzgzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpY
nV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9
Nov 24 22:06:03 compute-0 nova_compute[189608]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODE4ODY2NDc5MDYxNjk5NDc4Mz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTgxODg2NjQ3OTA2MTY5OTQ3ODM9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04MTg4NjY0NzkwNjE2OTk0NzgzPT0tLQo=',user_id='572aaac113f54af8a894707849aed6bf',uuid=828f9a8f-602f-4ad5-a0b0-5a48a328d20e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3223b8cb-74bd-4db9-8dd2-441f7c81c71c", "address": "fa:16:3e:3e:40:c3", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.166", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3223b8cb-74", "ovs_interfaceid": "3223b8cb-74bd-4db9-8dd2-441f7c81c71c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.759 189613 DEBUG nova.network.os_vif_util [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Converting VIF {"id": "3223b8cb-74bd-4db9-8dd2-441f7c81c71c", "address": "fa:16:3e:3e:40:c3", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.166", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3223b8cb-74", "ovs_interfaceid": "3223b8cb-74bd-4db9-8dd2-441f7c81c71c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.760 189613 DEBUG nova.network.os_vif_util [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3e:40:c3,bridge_name='br-int',has_traffic_filtering=True,id=3223b8cb-74bd-4db9-8dd2-441f7c81c71c,network=Network(1d1b3625-954d-4d8b-8b3f-323c25d9b42a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3223b8cb-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.760 189613 DEBUG os_vif [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:40:c3,bridge_name='br-int',has_traffic_filtering=True,id=3223b8cb-74bd-4db9-8dd2-441f7c81c71c,network=Network(1d1b3625-954d-4d8b-8b3f-323c25d9b42a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3223b8cb-74') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.761 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.761 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.761 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.763 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.765 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.765 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3223b8cb-74, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.766 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3223b8cb-74, col_values=(('external_ids', {'iface-id': '3223b8cb-74bd-4db9-8dd2-441f7c81c71c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3e:40:c3', 'vm-uuid': '828f9a8f-602f-4ad5-a0b0-5a48a328d20e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:06:03 compute-0 NetworkManager[56413]: <info>  [1764021963.7682] manager: (tap3223b8cb-74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.769 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.775 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.776 189613 INFO os_vif [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:40:c3,bridge_name='br-int',has_traffic_filtering=True,id=3223b8cb-74bd-4db9-8dd2-441f7c81c71c,network=Network(1d1b3625-954d-4d8b-8b3f-323c25d9b42a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3223b8cb-74')
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.829 189613 DEBUG nova.virt.libvirt.driver [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.829 189613 DEBUG nova.virt.libvirt.driver [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.830 189613 DEBUG nova.virt.libvirt.driver [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.830 189613 DEBUG nova.virt.libvirt.driver [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] No VIF found with MAC fa:16:3e:3e:40:c3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 24 22:06:03 compute-0 nova_compute[189608]: 2025-11-24 22:06:03.830 189613 INFO nova.virt.libvirt.driver [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Using config drive
Nov 24 22:06:04 compute-0 rsyslogd[237036]: message too long (8192) with configured size 8096, begin of message is: 2025-11-24 22:06:03.736 189613 DEBUG nova.virt.libvirt.vif [None req-01078940-47 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 24 22:06:04 compute-0 rsyslogd[237036]: message too long (8192) with configured size 8096, begin of message is: 2025-11-24 22:06:03.759 189613 DEBUG nova.virt.libvirt.vif [None req-01078940-47 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 24 22:06:04 compute-0 nova_compute[189608]: 2025-11-24 22:06:04.284 189613 INFO nova.virt.libvirt.driver [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Creating config drive at /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.config
Nov 24 22:06:04 compute-0 nova_compute[189608]: 2025-11-24 22:06:04.293 189613 DEBUG oslo_concurrency.processutils [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq7tao2vt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:06:04 compute-0 nova_compute[189608]: 2025-11-24 22:06:04.437 189613 DEBUG oslo_concurrency.processutils [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq7tao2vt" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:06:04 compute-0 kernel: tap3223b8cb-74: entered promiscuous mode
Nov 24 22:06:04 compute-0 NetworkManager[56413]: <info>  [1764021964.5364] manager: (tap3223b8cb-74): new Tun device (/org/freedesktop/NetworkManager/Devices/28)
Nov 24 22:06:04 compute-0 nova_compute[189608]: 2025-11-24 22:06:04.536 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:06:04 compute-0 ovn_controller[97889]: 2025-11-24T22:06:04Z|00035|binding|INFO|Claiming lport 3223b8cb-74bd-4db9-8dd2-441f7c81c71c for this chassis.
Nov 24 22:06:04 compute-0 ovn_controller[97889]: 2025-11-24T22:06:04Z|00036|binding|INFO|3223b8cb-74bd-4db9-8dd2-441f7c81c71c: Claiming fa:16:3e:3e:40:c3 192.168.0.166
Nov 24 22:06:04 compute-0 nova_compute[189608]: 2025-11-24 22:06:04.542 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:06:04 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:06:04.555 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:40:c3 192.168.0.166'], port_security=['fa:16:3e:3e:40:c3 192.168.0.166'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-mikdi7lqlsa5-5i26uioiiugb-edy6vcflj352-port-dk6alb7qkgmn', 'neutron:cidrs': '192.168.0.166/24', 'neutron:device_id': '828f9a8f-602f-4ad5-a0b0-5a48a328d20e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1d1b3625-954d-4d8b-8b3f-323c25d9b42a', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-mikdi7lqlsa5-5i26uioiiugb-edy6vcflj352-port-dk6alb7qkgmn', 'neutron:project_id': '309342b7e3e849b2a5dd56651d8fa068', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c15160ba-5e91-4c5e-8c90-4266712c07a0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.244'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b82cef4e-b720-4309-9703-51bb202fedba, chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], logical_port=3223b8cb-74bd-4db9-8dd2-441f7c81c71c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:06:04 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:06:04.557 106776 INFO neutron.agent.ovn.metadata.agent [-] Port 3223b8cb-74bd-4db9-8dd2-441f7c81c71c in datapath 1d1b3625-954d-4d8b-8b3f-323c25d9b42a bound to our chassis
Nov 24 22:06:04 compute-0 nova_compute[189608]: 2025-11-24 22:06:04.558 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:06:04 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:06:04.560 106776 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1d1b3625-954d-4d8b-8b3f-323c25d9b42a
Nov 24 22:06:04 compute-0 ovn_controller[97889]: 2025-11-24T22:06:04Z|00037|binding|INFO|Setting lport 3223b8cb-74bd-4db9-8dd2-441f7c81c71c ovn-installed in OVS
Nov 24 22:06:04 compute-0 ovn_controller[97889]: 2025-11-24T22:06:04Z|00038|binding|INFO|Setting lport 3223b8cb-74bd-4db9-8dd2-441f7c81c71c up in Southbound
Nov 24 22:06:04 compute-0 nova_compute[189608]: 2025-11-24 22:06:04.569 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:06:04 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:06:04.591 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[984f5868-1c08-4c0f-8ad8-416597242d81]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:06:04 compute-0 systemd-udevd[240651]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 22:06:04 compute-0 systemd-machined[155884]: New machine qemu-2-instance-00000002.
Nov 24 22:06:04 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Nov 24 22:06:04 compute-0 NetworkManager[56413]: <info>  [1764021964.6141] device (tap3223b8cb-74): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 22:06:04 compute-0 NetworkManager[56413]: <info>  [1764021964.6152] device (tap3223b8cb-74): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 24 22:06:04 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:06:04.634 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[33b62578-169c-45e1-955a-53d4204ceb0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:06:04 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:06:04.639 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[6483022a-6406-4ab2-a188-009ab52a5520]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:06:04 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:06:04.672 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[375a79b4-f8e4-47af-8aff-d70cd81a8d11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:06:04 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:06:04.691 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[367356c9-d31c-46ba-8ef2-d7ab1ae35c26]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1d1b3625-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:e7:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 373788, 'reachable_time': 19807, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240663, 'error': None, 'target': 'ovnmeta-1d1b3625-954d-4d8b-8b3f-323c25d9b42a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:06:04 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:06:04.709 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[3a5feab4-7c12-4cdb-a744-e837f66a8429]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1d1b3625-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 373805, 'tstamp': 373805}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240665, 'error': None, 'target': 'ovnmeta-1d1b3625-954d-4d8b-8b3f-323c25d9b42a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap1d1b3625-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 373810, 'tstamp': 373810}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240665, 'error': None, 'target': 'ovnmeta-1d1b3625-954d-4d8b-8b3f-323c25d9b42a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:06:04 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:06:04.711 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d1b3625-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:06:04 compute-0 nova_compute[189608]: 2025-11-24 22:06:04.713 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:06:04 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:06:04.714 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1d1b3625-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:06:04 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:06:04.714 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 22:06:04 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:06:04.715 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1d1b3625-90, col_values=(('external_ids', {'iface-id': '13073b7d-8165-42cd-87f4-fb1eb15a5b94'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:06:04 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:06:04.715 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 22:06:05 compute-0 nova_compute[189608]: 2025-11-24 22:06:05.086 189613 DEBUG nova.compute.manager [req-05d8956b-7e50-4e84-a6d8-56455d57842b req-740eca16-609f-4a38-9f57-992bf94b500e c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Received event network-vif-plugged-3223b8cb-74bd-4db9-8dd2-441f7c81c71c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:06:05 compute-0 nova_compute[189608]: 2025-11-24 22:06:05.087 189613 DEBUG oslo_concurrency.lockutils [req-05d8956b-7e50-4e84-a6d8-56455d57842b req-740eca16-609f-4a38-9f57-992bf94b500e c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "828f9a8f-602f-4ad5-a0b0-5a48a328d20e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:06:05 compute-0 nova_compute[189608]: 2025-11-24 22:06:05.087 189613 DEBUG oslo_concurrency.lockutils [req-05d8956b-7e50-4e84-a6d8-56455d57842b req-740eca16-609f-4a38-9f57-992bf94b500e c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "828f9a8f-602f-4ad5-a0b0-5a48a328d20e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:06:05 compute-0 nova_compute[189608]: 2025-11-24 22:06:05.087 189613 DEBUG oslo_concurrency.lockutils [req-05d8956b-7e50-4e84-a6d8-56455d57842b req-740eca16-609f-4a38-9f57-992bf94b500e c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "828f9a8f-602f-4ad5-a0b0-5a48a328d20e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:06:05 compute-0 nova_compute[189608]: 2025-11-24 22:06:05.087 189613 DEBUG nova.compute.manager [req-05d8956b-7e50-4e84-a6d8-56455d57842b req-740eca16-609f-4a38-9f57-992bf94b500e c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Processing event network-vif-plugged-3223b8cb-74bd-4db9-8dd2-441f7c81c71c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 24 22:06:05 compute-0 nova_compute[189608]: 2025-11-24 22:06:05.165 189613 DEBUG nova.virt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Emitting event <LifecycleEvent: 1764021965.1645513, 828f9a8f-602f-4ad5-a0b0-5a48a328d20e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:06:05 compute-0 nova_compute[189608]: 2025-11-24 22:06:05.166 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] VM Started (Lifecycle Event)
Nov 24 22:06:05 compute-0 nova_compute[189608]: 2025-11-24 22:06:05.170 189613 DEBUG nova.compute.manager [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 24 22:06:05 compute-0 nova_compute[189608]: 2025-11-24 22:06:05.179 189613 DEBUG nova.virt.libvirt.driver [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 24 22:06:05 compute-0 nova_compute[189608]: 2025-11-24 22:06:05.186 189613 INFO nova.virt.libvirt.driver [-] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Instance spawned successfully.
Nov 24 22:06:05 compute-0 nova_compute[189608]: 2025-11-24 22:06:05.188 189613 DEBUG nova.virt.libvirt.driver [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 24 22:06:05 compute-0 nova_compute[189608]: 2025-11-24 22:06:05.191 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:06:05 compute-0 nova_compute[189608]: 2025-11-24 22:06:05.212 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 22:06:05 compute-0 nova_compute[189608]: 2025-11-24 22:06:05.222 189613 DEBUG nova.virt.libvirt.driver [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:06:05 compute-0 nova_compute[189608]: 2025-11-24 22:06:05.223 189613 DEBUG nova.virt.libvirt.driver [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:06:05 compute-0 nova_compute[189608]: 2025-11-24 22:06:05.224 189613 DEBUG nova.virt.libvirt.driver [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:06:05 compute-0 nova_compute[189608]: 2025-11-24 22:06:05.225 189613 DEBUG nova.virt.libvirt.driver [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:06:05 compute-0 nova_compute[189608]: 2025-11-24 22:06:05.225 189613 DEBUG nova.virt.libvirt.driver [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:06:05 compute-0 nova_compute[189608]: 2025-11-24 22:06:05.226 189613 DEBUG nova.virt.libvirt.driver [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:06:05 compute-0 nova_compute[189608]: 2025-11-24 22:06:05.234 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 22:06:05 compute-0 nova_compute[189608]: 2025-11-24 22:06:05.234 189613 DEBUG nova.virt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Emitting event <LifecycleEvent: 1764021965.1647177, 828f9a8f-602f-4ad5-a0b0-5a48a328d20e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:06:05 compute-0 nova_compute[189608]: 2025-11-24 22:06:05.235 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] VM Paused (Lifecycle Event)
Nov 24 22:06:05 compute-0 nova_compute[189608]: 2025-11-24 22:06:05.260 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:06:05 compute-0 nova_compute[189608]: 2025-11-24 22:06:05.266 189613 DEBUG nova.virt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Emitting event <LifecycleEvent: 1764021965.1768413, 828f9a8f-602f-4ad5-a0b0-5a48a328d20e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:06:05 compute-0 nova_compute[189608]: 2025-11-24 22:06:05.266 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] VM Resumed (Lifecycle Event)
Nov 24 22:06:05 compute-0 nova_compute[189608]: 2025-11-24 22:06:05.281 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:06:05 compute-0 nova_compute[189608]: 2025-11-24 22:06:05.286 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 22:06:05 compute-0 nova_compute[189608]: 2025-11-24 22:06:05.293 189613 INFO nova.compute.manager [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Took 6.18 seconds to spawn the instance on the hypervisor.
Nov 24 22:06:05 compute-0 nova_compute[189608]: 2025-11-24 22:06:05.293 189613 DEBUG nova.compute.manager [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:06:05 compute-0 nova_compute[189608]: 2025-11-24 22:06:05.303 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 22:06:05 compute-0 nova_compute[189608]: 2025-11-24 22:06:05.358 189613 INFO nova.compute.manager [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Took 6.70 seconds to build instance.
Nov 24 22:06:05 compute-0 nova_compute[189608]: 2025-11-24 22:06:05.383 189613 DEBUG oslo_concurrency.lockutils [None req-01078940-478f-4b2f-a3ce-8580e2888d46 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "828f9a8f-602f-4ad5-a0b0-5a48a328d20e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:06:05 compute-0 nova_compute[189608]: 2025-11-24 22:06:05.979 189613 DEBUG nova.network.neutron [req-b99f8424-08a2-41df-b945-d4495981dbb5 req-76490f24-1634-4c65-bded-e550cff3d10a c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Updated VIF entry in instance network info cache for port 3223b8cb-74bd-4db9-8dd2-441f7c81c71c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 22:06:05 compute-0 nova_compute[189608]: 2025-11-24 22:06:05.980 189613 DEBUG nova.network.neutron [req-b99f8424-08a2-41df-b945-d4495981dbb5 req-76490f24-1634-4c65-bded-e550cff3d10a c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Updating instance_info_cache with network_info: [{"id": "3223b8cb-74bd-4db9-8dd2-441f7c81c71c", "address": "fa:16:3e:3e:40:c3", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.166", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3223b8cb-74", "ovs_interfaceid": "3223b8cb-74bd-4db9-8dd2-441f7c81c71c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:06:05 compute-0 nova_compute[189608]: 2025-11-24 22:06:05.996 189613 DEBUG oslo_concurrency.lockutils [req-b99f8424-08a2-41df-b945-d4495981dbb5 req-76490f24-1634-4c65-bded-e550cff3d10a c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Releasing lock "refresh_cache-828f9a8f-602f-4ad5-a0b0-5a48a328d20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:06:07 compute-0 nova_compute[189608]: 2025-11-24 22:06:07.277 189613 DEBUG nova.compute.manager [req-50808ff9-6d4e-490c-b450-ecf25dc53070 req-0176c64a-29de-4219-b624-a09d7e768520 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Received event network-vif-plugged-3223b8cb-74bd-4db9-8dd2-441f7c81c71c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:06:07 compute-0 nova_compute[189608]: 2025-11-24 22:06:07.277 189613 DEBUG oslo_concurrency.lockutils [req-50808ff9-6d4e-490c-b450-ecf25dc53070 req-0176c64a-29de-4219-b624-a09d7e768520 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "828f9a8f-602f-4ad5-a0b0-5a48a328d20e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:06:07 compute-0 nova_compute[189608]: 2025-11-24 22:06:07.278 189613 DEBUG oslo_concurrency.lockutils [req-50808ff9-6d4e-490c-b450-ecf25dc53070 req-0176c64a-29de-4219-b624-a09d7e768520 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "828f9a8f-602f-4ad5-a0b0-5a48a328d20e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:06:07 compute-0 nova_compute[189608]: 2025-11-24 22:06:07.278 189613 DEBUG oslo_concurrency.lockutils [req-50808ff9-6d4e-490c-b450-ecf25dc53070 req-0176c64a-29de-4219-b624-a09d7e768520 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "828f9a8f-602f-4ad5-a0b0-5a48a328d20e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:06:07 compute-0 nova_compute[189608]: 2025-11-24 22:06:07.279 189613 DEBUG nova.compute.manager [req-50808ff9-6d4e-490c-b450-ecf25dc53070 req-0176c64a-29de-4219-b624-a09d7e768520 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] No waiting events found dispatching network-vif-plugged-3223b8cb-74bd-4db9-8dd2-441f7c81c71c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 22:06:07 compute-0 nova_compute[189608]: 2025-11-24 22:06:07.279 189613 WARNING nova.compute.manager [req-50808ff9-6d4e-490c-b450-ecf25dc53070 req-0176c64a-29de-4219-b624-a09d7e768520 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Received unexpected event network-vif-plugged-3223b8cb-74bd-4db9-8dd2-441f7c81c71c for instance with vm_state active and task_state None.
Nov 24 22:06:07 compute-0 podman[240674]: 2025-11-24 22:06:07.539123273 +0000 UTC m=+0.090117597 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9-minimal, io.openshift.expose-services=, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, vcs-type=git, version=9.6, architecture=x86_64, vendor=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41)
Nov 24 22:06:07 compute-0 podman[240673]: 2025-11-24 22:06:07.546452154 +0000 UTC m=+0.096500090 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., release=1214.1726694543, version=9.4, container_name=kepler, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.tags=base rhel9, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30)
Nov 24 22:06:07 compute-0 podman[240675]: 2025-11-24 22:06:07.550252199 +0000 UTC m=+0.084437324 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 22:06:08 compute-0 nova_compute[189608]: 2025-11-24 22:06:08.097 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:06:08 compute-0 nova_compute[189608]: 2025-11-24 22:06:08.768 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:06:10 compute-0 nova_compute[189608]: 2025-11-24 22:06:10.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:06:10 compute-0 nova_compute[189608]: 2025-11-24 22:06:10.794 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 22:06:10 compute-0 nova_compute[189608]: 2025-11-24 22:06:10.794 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 22:06:10 compute-0 nova_compute[189608]: 2025-11-24 22:06:10.951 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "refresh_cache-ea741b45-c6b4-41c0-a70f-c752b616faa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:06:10 compute-0 nova_compute[189608]: 2025-11-24 22:06:10.952 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquired lock "refresh_cache-ea741b45-c6b4-41c0-a70f-c752b616faa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:06:10 compute-0 nova_compute[189608]: 2025-11-24 22:06:10.952 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 24 22:06:10 compute-0 nova_compute[189608]: 2025-11-24 22:06:10.952 189613 DEBUG nova.objects.instance [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lazy-loading 'info_cache' on Instance uuid ea741b45-c6b4-41c0-a70f-c752b616faa2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:06:12 compute-0 podman[240729]: 2025-11-24 22:06:12.535839921 +0000 UTC m=+0.093886461 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 22:06:13 compute-0 nova_compute[189608]: 2025-11-24 22:06:13.100 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:06:13 compute-0 nova_compute[189608]: 2025-11-24 22:06:13.115 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Updating instance_info_cache with network_info: [{"id": "5430cfcb-550b-4518-9caa-0720f99730b9", "address": "fa:16:3e:85:21:ae", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5430cfcb-55", "ovs_interfaceid": "5430cfcb-550b-4518-9caa-0720f99730b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:06:13 compute-0 nova_compute[189608]: 2025-11-24 22:06:13.129 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Releasing lock "refresh_cache-ea741b45-c6b4-41c0-a70f-c752b616faa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:06:13 compute-0 nova_compute[189608]: 2025-11-24 22:06:13.130 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 24 22:06:13 compute-0 nova_compute[189608]: 2025-11-24 22:06:13.130 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:06:13 compute-0 nova_compute[189608]: 2025-11-24 22:06:13.131 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:06:13 compute-0 nova_compute[189608]: 2025-11-24 22:06:13.165 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:06:13 compute-0 nova_compute[189608]: 2025-11-24 22:06:13.166 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:06:13 compute-0 nova_compute[189608]: 2025-11-24 22:06:13.166 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:06:13 compute-0 nova_compute[189608]: 2025-11-24 22:06:13.167 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 22:06:13 compute-0 nova_compute[189608]: 2025-11-24 22:06:13.251 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:06:13 compute-0 nova_compute[189608]: 2025-11-24 22:06:13.339 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:06:13 compute-0 nova_compute[189608]: 2025-11-24 22:06:13.340 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:06:13 compute-0 nova_compute[189608]: 2025-11-24 22:06:13.409 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:06:13 compute-0 nova_compute[189608]: 2025-11-24 22:06:13.410 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:06:13 compute-0 nova_compute[189608]: 2025-11-24 22:06:13.467 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:06:13 compute-0 nova_compute[189608]: 2025-11-24 22:06:13.468 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:06:13 compute-0 nova_compute[189608]: 2025-11-24 22:06:13.546 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:06:13 compute-0 nova_compute[189608]: 2025-11-24 22:06:13.553 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:06:13 compute-0 nova_compute[189608]: 2025-11-24 22:06:13.646 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:06:13 compute-0 nova_compute[189608]: 2025-11-24 22:06:13.647 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:06:13 compute-0 nova_compute[189608]: 2025-11-24 22:06:13.729 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:06:13 compute-0 nova_compute[189608]: 2025-11-24 22:06:13.730 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:06:13 compute-0 nova_compute[189608]: 2025-11-24 22:06:13.771 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:06:13 compute-0 nova_compute[189608]: 2025-11-24 22:06:13.827 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:06:13 compute-0 nova_compute[189608]: 2025-11-24 22:06:13.828 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:06:13 compute-0 nova_compute[189608]: 2025-11-24 22:06:13.896 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:06:14 compute-0 nova_compute[189608]: 2025-11-24 22:06:14.286 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:06:14 compute-0 nova_compute[189608]: 2025-11-24 22:06:14.288 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5119MB free_disk=72.20843887329102GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 22:06:14 compute-0 nova_compute[189608]: 2025-11-24 22:06:14.288 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:06:14 compute-0 nova_compute[189608]: 2025-11-24 22:06:14.289 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:06:14 compute-0 nova_compute[189608]: 2025-11-24 22:06:14.364 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance ea741b45-c6b4-41c0-a70f-c752b616faa2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:06:14 compute-0 nova_compute[189608]: 2025-11-24 22:06:14.365 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance 828f9a8f-602f-4ad5-a0b0-5a48a328d20e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:06:14 compute-0 nova_compute[189608]: 2025-11-24 22:06:14.365 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 22:06:14 compute-0 nova_compute[189608]: 2025-11-24 22:06:14.366 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 22:06:14 compute-0 nova_compute[189608]: 2025-11-24 22:06:14.451 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:06:14 compute-0 nova_compute[189608]: 2025-11-24 22:06:14.463 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:06:14 compute-0 nova_compute[189608]: 2025-11-24 22:06:14.495 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 22:06:14 compute-0 nova_compute[189608]: 2025-11-24 22:06:14.496 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.208s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:06:14 compute-0 podman[240778]: 2025-11-24 22:06:14.752666876 +0000 UTC m=+0.063336396 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 24 22:06:14 compute-0 podman[240777]: 2025-11-24 22:06:14.795188602 +0000 UTC m=+0.113557066 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 22:06:15 compute-0 nova_compute[189608]: 2025-11-24 22:06:15.159 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:06:15 compute-0 nova_compute[189608]: 2025-11-24 22:06:15.160 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:06:15 compute-0 nova_compute[189608]: 2025-11-24 22:06:15.161 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:06:15 compute-0 nova_compute[189608]: 2025-11-24 22:06:15.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:06:15 compute-0 nova_compute[189608]: 2025-11-24 22:06:15.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:06:15 compute-0 nova_compute[189608]: 2025-11-24 22:06:15.795 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 22:06:16 compute-0 nova_compute[189608]: 2025-11-24 22:06:16.795 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:06:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:17.619 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 24 22:06:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:17.620 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 24 22:06:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:17.620 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:06:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:17.621 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f4c57fbbfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:06:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:17.621 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:06:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:17.621 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:06:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:17.621 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:06:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:17.621 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:06:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:17.621 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:06:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:17.621 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:06:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:17.621 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:06:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:17.622 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:06:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:17.622 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:06:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:17.622 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:06:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:17.622 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:06:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:17.622 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:06:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:17.622 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:06:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:17.622 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:06:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:17.622 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:06:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:17.622 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:06:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:17.622 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:06:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:17.623 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:06:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:17.623 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:06:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:17.623 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:06:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:17.623 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:06:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:17.623 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:06:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:17.623 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:06:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:17.623 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:06:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:17.623 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:06:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:17.625 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance ea741b45-c6b4-41c0-a70f-c752b616faa2 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 24 22:06:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:18.028 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/ea741b45-c6b4-41c0-a70f-c752b616faa2 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}64060fdce0de69c84bc4ec1fbe9dbdfbc4dd57ffec34862b3445d9841d3967a2" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 24 22:06:18 compute-0 nova_compute[189608]: 2025-11-24 22:06:18.103 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:06:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:18.591 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1850 Content-Type: application/json Date: Mon, 24 Nov 2025 22:06:18 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-6fd26951-3d70-4660-ac2a-91eaaef19a14 x-openstack-request-id: req-6fd26951-3d70-4660-ac2a-91eaaef19a14 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 24 22:06:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:18.592 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "ea741b45-c6b4-41c0-a70f-c752b616faa2", "name": "test_0", "status": "ACTIVE", "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "user_id": "572aaac113f54af8a894707849aed6bf", "metadata": {}, "hostId": "138597a5fd3bc726da772e57d320c412a04081cc4fa53fa9b77f5b6a", "image": {"id": "a63b9561-12dc-4c11-858f-aa6fafbed036", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/a63b9561-12dc-4c11-858f-aa6fafbed036"}]}, "flavor": {"id": "cb3b4b8d-66d1-4524-ab48-d562ca8e5b4b", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/cb3b4b8d-66d1-4524-ab48-d562ca8e5b4b"}]}, "created": "2025-11-24T22:04:33Z", "updated": "2025-11-24T22:04:47Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.169", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:85:21:ae"}, {"version": 4, "addr": "192.168.122.190", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:85:21:ae"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/ea741b45-c6b4-41c0-a70f-c752b616faa2"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/ea741b45-c6b4-41c0-a70f-c752b616faa2"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-11-24T22:04:47.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 24 22:06:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:18.592 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/ea741b45-c6b4-41c0-a70f-c752b616faa2 used request id req-6fd26951-3d70-4660-ac2a-91eaaef19a14 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 24 22:06:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:18.595 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'ea741b45-c6b4-41c0-a70f-c752b616faa2', 'name': 'test_0', 'flavor': {'id': 'cb3b4b8d-66d1-4524-ab48-d562ca8e5b4b', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a63b9561-12dc-4c11-858f-aa6fafbed036'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '309342b7e3e849b2a5dd56651d8fa068', 'user_id': '572aaac113f54af8a894707849aed6bf', 'hostId': '138597a5fd3bc726da772e57d320c412a04081cc4fa53fa9b77f5b6a', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 24 22:06:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:18.600 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 828f9a8f-602f-4ad5-a0b0-5a48a328d20e from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 24 22:06:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:18.601 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/828f9a8f-602f-4ad5-a0b0-5a48a328d20e -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}64060fdce0de69c84bc4ec1fbe9dbdfbc4dd57ffec34862b3445d9841d3967a2" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 24 22:06:18 compute-0 nova_compute[189608]: 2025-11-24 22:06:18.777 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.252 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Mon, 24 Nov 2025 22:06:18 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-6eb89bb4-a9ff-4665-9834-8b4af0720b15 x-openstack-request-id: req-6eb89bb4-a9ff-4665-9834-8b4af0720b15 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.252 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "828f9a8f-602f-4ad5-a0b0-5a48a328d20e", "name": "vn-7lqlsa5-5i26uioiiugb-edy6vcflj352-vnf-mrrqmlwdy5wk", "status": "ACTIVE", "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "user_id": "572aaac113f54af8a894707849aed6bf", "metadata": {"metering.server_group": "b438824c-ce52-4539-9db6-355e0ca018db"}, "hostId": "138597a5fd3bc726da772e57d320c412a04081cc4fa53fa9b77f5b6a", "image": {"id": "a63b9561-12dc-4c11-858f-aa6fafbed036", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/a63b9561-12dc-4c11-858f-aa6fafbed036"}]}, "flavor": {"id": "cb3b4b8d-66d1-4524-ab48-d562ca8e5b4b", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/cb3b4b8d-66d1-4524-ab48-d562ca8e5b4b"}]}, "created": "2025-11-24T22:05:57Z", "updated": "2025-11-24T22:06:05Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.166", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:3e:40:c3"}, {"version": 4, "addr": "192.168.122.244", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:3e:40:c3"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/828f9a8f-602f-4ad5-a0b0-5a48a328d20e"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/828f9a8f-602f-4ad5-a0b0-5a48a328d20e"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-11-24T22:06:05.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.253 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/828f9a8f-602f-4ad5-a0b0-5a48a328d20e used request id req-6eb89bb4-a9ff-4665-9834-8b4af0720b15 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.255 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '828f9a8f-602f-4ad5-a0b0-5a48a328d20e', 'name': 'vn-7lqlsa5-5i26uioiiugb-edy6vcflj352-vnf-mrrqmlwdy5wk', 'flavor': {'id': 'cb3b4b8d-66d1-4524-ab48-d562ca8e5b4b', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a63b9561-12dc-4c11-858f-aa6fafbed036'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '309342b7e3e849b2a5dd56651d8fa068', 'user_id': '572aaac113f54af8a894707849aed6bf', 'hostId': '138597a5fd3bc726da772e57d320c412a04081cc4fa53fa9b77f5b6a', 'status': 'active', 'metadata': {'metering.server_group': 'b438824c-ce52-4539-9db6-355e0ca018db'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.255 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.255 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.256 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.257 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.258 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-24T22:06:19.256481) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.262 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for ea741b45-c6b4-41c0-a70f-c752b616faa2 / tap5430cfcb-55 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.263 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.269 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 828f9a8f-602f-4ad5-a0b0-5a48a328d20e / tap3223b8cb-74 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.269 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.271 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.272 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f4c580340b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.272 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.272 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.273 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.273 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.273 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.274 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.275 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-24T22:06:19.273287) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.276 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.276 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f4c57fba4e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.276 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.277 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.277 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.277 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.278 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-24T22:06:19.277413) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.321 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.322 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.322 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.370 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.372 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.372 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.373 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.373 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f4c57fbb080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.374 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.374 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.374 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.374 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.376 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-24T22:06:19.374864) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.491 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.492 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.493 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.605 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.bytes volume: 18348032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.607 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.608 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.609 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.609 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f4c57fbb1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.610 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.610 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.610 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.611 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.611 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.latency volume: 667223647 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.611 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.latency volume: 118019506 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.612 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.latency volume: 84934831 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.613 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.latency volume: 948050322 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.614 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.614 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.latency volume: 3665121 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.614 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-24T22:06:19.611049) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.615 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.616 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f4c57fb9730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.616 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.616 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.616 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.617 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.617 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-24T22:06:19.616930) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.652 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/cpu volume: 33960000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.677 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/cpu volume: 14060000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.678 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.678 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f4c57fbb200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.678 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.678 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.678 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.678 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.679 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.679 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.680 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.680 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-24T22:06:19.678849) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.681 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.requests volume: 573 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.681 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.681 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.682 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.682 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f4c57fbb260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.682 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.682 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.683 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.683 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.683 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.683 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.684 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-24T22:06:19.683151) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.684 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.684 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.685 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.685 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.686 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.686 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f4c57fbb2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.686 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.687 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.687 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.687 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.687 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.687 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.688 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.689 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-24T22:06:19.687323) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.689 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.689 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.689 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.690 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.690 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f4c57fbb320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.690 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.691 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.691 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.691 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.692 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.latency volume: 2026985911 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.692 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-24T22:06:19.691619) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.692 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.latency volume: 25037348 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.693 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.693 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.693 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.694 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.695 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.695 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f4c57fbb380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.695 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.695 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.695 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.696 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.696 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-24T22:06:19.696082) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.696 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.696 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.697 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.697 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.698 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.699 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.700 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.700 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f4c58034380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.700 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.700 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.700 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.700 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.701 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.701 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-24T22:06:19.700902) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.701 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.701 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.702 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f4c57fbb740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.702 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.702 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.702 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.702 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.702 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.702 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-24T22:06:19.702386) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.702 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.703 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.703 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f4c57fbb3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.703 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.703 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.703 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.703 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.704 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-24T22:06:19.703740) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.704 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.704 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f4c57fbb9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.704 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.704 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.704 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.704 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.705 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-24T22:06:19.704844) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.705 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.705 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: test_0>, <NovaLikeServer: vn-7lqlsa5-5i26uioiiugb-edy6vcflj352-vnf-mrrqmlwdy5wk>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>, <NovaLikeServer: vn-7lqlsa5-5i26uioiiugb-edy6vcflj352-vnf-mrrqmlwdy5wk>]
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.705 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f4c57fbb440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.705 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.706 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.706 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.706 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.706 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.706 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f4c57fbbc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.707 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-24T22:06:19.706187) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.707 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.707 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.707 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.707 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.707 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-24T22:06:19.707318) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.707 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.707 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.708 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f4c57fbbcb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.708 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.708 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.708 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.708 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.708 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.708 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-24T22:06:19.708693) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.709 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.709 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f4c57fbbd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.709 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.709 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.709 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.710 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.710 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.710 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-24T22:06:19.710035) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.710 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.710 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f4c57fbbda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.711 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.711 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.711 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.711 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.711 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.outgoing.bytes volume: 2062 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.711 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.712 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-24T22:06:19.711527) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.712 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.712 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f4c57fbbe30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.712 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.712 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.712 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.712 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.713 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.713 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.713 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-24T22:06:19.712930) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.713 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.713 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f4c57fbb680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.714 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.714 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.714 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.714 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.714 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/memory.usage volume: 48.9453125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.714 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-24T22:06:19.714556) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.715 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.715 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic in not available for instance 828f9a8f-602f-4ad5-a0b0-5a48a328d20e: ceilometer.compute.pollsters.NoVolumeException
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.715 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f4c57fbbec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.715 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.715 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.715 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.715 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.716 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.716 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-24T22:06:19.715873) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.716 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: test_0>, <NovaLikeServer: vn-7lqlsa5-5i26uioiiugb-edy6vcflj352-vnf-mrrqmlwdy5wk>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>, <NovaLikeServer: vn-7lqlsa5-5i26uioiiugb-edy6vcflj352-vnf-mrrqmlwdy5wk>]
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.716 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f4c57fbb6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.716 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.716 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.716 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.716 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.717 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.718 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.718 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.719 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f4c59b99130>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.719 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.719 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.719 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.719 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.719 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.719 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-24T22:06:19.716952) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.720 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.720 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-24T22:06:19.719468) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.720 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.720 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.720 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.720 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.721 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.721 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f4c57fbbf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.721 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.721 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.721 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.721 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.722 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.outgoing.packets volume: 19 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.722 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-24T22:06:19.721945) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.722 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.722 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:06:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:06:19.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:06:22 compute-0 podman[240822]: 2025-11-24 22:06:22.577323972 +0000 UTC m=+0.122708912 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 24 22:06:23 compute-0 nova_compute[189608]: 2025-11-24 22:06:23.108 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:06:23 compute-0 nova_compute[189608]: 2025-11-24 22:06:23.782 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:06:28 compute-0 nova_compute[189608]: 2025-11-24 22:06:28.110 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:06:28 compute-0 nova_compute[189608]: 2025-11-24 22:06:28.787 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:06:29 compute-0 podman[240846]: 2025-11-24 22:06:29.545882894 +0000 UTC m=+0.093384865 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd)
Nov 24 22:06:29 compute-0 podman[203795]: time="2025-11-24T22:06:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:06:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:06:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 24 22:06:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:06:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4791 "" "Go-http-client/1.1"
Nov 24 22:06:31 compute-0 openstack_network_exporter[205945]: ERROR   22:06:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:06:31 compute-0 openstack_network_exporter[205945]: ERROR   22:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:06:31 compute-0 openstack_network_exporter[205945]: ERROR   22:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:06:31 compute-0 openstack_network_exporter[205945]: ERROR   22:06:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:06:31 compute-0 openstack_network_exporter[205945]: ERROR   22:06:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:06:32 compute-0 podman[240866]: 2025-11-24 22:06:32.591428263 +0000 UTC m=+0.144309295 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, io.buildah.version=1.41.4)
Nov 24 22:06:33 compute-0 nova_compute[189608]: 2025-11-24 22:06:33.113 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:06:33 compute-0 nova_compute[189608]: 2025-11-24 22:06:33.790 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:06:34 compute-0 ovn_controller[97889]: 2025-11-24T22:06:34Z|00039|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Nov 24 22:06:38 compute-0 nova_compute[189608]: 2025-11-24 22:06:38.117 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:06:38 compute-0 podman[240886]: 2025-11-24 22:06:38.587804845 +0000 UTC m=+0.143168341 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, maintainer=Red Hat, Inc., name=ubi9, io.openshift.tags=base rhel9, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, release-0.7.12=, config_id=edpm, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, com.redhat.component=ubi9-container)
Nov 24 22:06:38 compute-0 podman[240888]: 2025-11-24 22:06:38.598153107 +0000 UTC m=+0.137294133 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3)
Nov 24 22:06:38 compute-0 podman[240887]: 2025-11-24 22:06:38.615169752 +0000 UTC m=+0.147208053 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, managed_by=edpm_ansible, name=ubi9-minimal, vcs-type=git, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, release=1755695350, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Nov 24 22:06:38 compute-0 nova_compute[189608]: 2025-11-24 22:06:38.794 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:06:39 compute-0 ovn_controller[97889]: 2025-11-24T22:06:39Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3e:40:c3 192.168.0.166
Nov 24 22:06:39 compute-0 ovn_controller[97889]: 2025-11-24T22:06:39Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3e:40:c3 192.168.0.166
Nov 24 22:06:43 compute-0 nova_compute[189608]: 2025-11-24 22:06:43.119 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:06:43 compute-0 podman[240949]: 2025-11-24 22:06:43.533111808 +0000 UTC m=+0.097210592 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 24 22:06:43 compute-0 nova_compute[189608]: 2025-11-24 22:06:43.797 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:06:45 compute-0 podman[240973]: 2025-11-24 22:06:45.578766225 +0000 UTC m=+0.130897780 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 24 22:06:45 compute-0 podman[240972]: 2025-11-24 22:06:45.617697202 +0000 UTC m=+0.165264409 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 24 22:06:48 compute-0 nova_compute[189608]: 2025-11-24 22:06:48.122 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:06:48 compute-0 nova_compute[189608]: 2025-11-24 22:06:48.800 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:06:53 compute-0 nova_compute[189608]: 2025-11-24 22:06:53.126 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:06:53 compute-0 podman[241014]: 2025-11-24 22:06:53.586003152 +0000 UTC m=+0.135064586 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 24 22:06:53 compute-0 nova_compute[189608]: 2025-11-24 22:06:53.803 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:06:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:06:54.561 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:06:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:06:54.562 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:06:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:06:54.562 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:06:58 compute-0 nova_compute[189608]: 2025-11-24 22:06:58.129 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:06:58 compute-0 nova_compute[189608]: 2025-11-24 22:06:58.806 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:06:59 compute-0 podman[203795]: time="2025-11-24T22:06:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:06:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:06:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 24 22:06:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:06:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4794 "" "Go-http-client/1.1"
Nov 24 22:07:00 compute-0 podman[241038]: 2025-11-24 22:07:00.541892222 +0000 UTC m=+0.091853069 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Nov 24 22:07:01 compute-0 openstack_network_exporter[205945]: ERROR   22:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:07:01 compute-0 openstack_network_exporter[205945]: ERROR   22:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:07:01 compute-0 openstack_network_exporter[205945]: ERROR   22:07:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:07:01 compute-0 openstack_network_exporter[205945]: ERROR   22:07:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:07:01 compute-0 openstack_network_exporter[205945]: ERROR   22:07:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:07:03 compute-0 nova_compute[189608]: 2025-11-24 22:07:03.132 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:07:03 compute-0 podman[241058]: 2025-11-24 22:07:03.524620735 +0000 UTC m=+0.083154957 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, config_id=edpm, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 24 22:07:03 compute-0 nova_compute[189608]: 2025-11-24 22:07:03.810 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:07:06 compute-0 nova_compute[189608]: 2025-11-24 22:07:06.791 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:07:08 compute-0 nova_compute[189608]: 2025-11-24 22:07:08.135 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:07:08 compute-0 nova_compute[189608]: 2025-11-24 22:07:08.813 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:07:09 compute-0 podman[241079]: 2025-11-24 22:07:09.548607137 +0000 UTC m=+0.095792788 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., release=1214.1726694543, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, config_id=edpm, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=base rhel9, name=ubi9, release-0.7.12=)
Nov 24 22:07:09 compute-0 podman[241080]: 2025-11-24 22:07:09.564726074 +0000 UTC m=+0.100520031 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-type=git, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, build-date=2025-08-20T13:12:41, release=1755695350, vendor=Red Hat, Inc.)
Nov 24 22:07:09 compute-0 podman[241086]: 2025-11-24 22:07:09.597904657 +0000 UTC m=+0.114898696 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 24 22:07:10 compute-0 nova_compute[189608]: 2025-11-24 22:07:10.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:07:10 compute-0 nova_compute[189608]: 2025-11-24 22:07:10.793 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 22:07:11 compute-0 nova_compute[189608]: 2025-11-24 22:07:11.997 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "refresh_cache-828f9a8f-602f-4ad5-a0b0-5a48a328d20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:07:11 compute-0 nova_compute[189608]: 2025-11-24 22:07:11.998 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquired lock "refresh_cache-828f9a8f-602f-4ad5-a0b0-5a48a328d20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:07:11 compute-0 nova_compute[189608]: 2025-11-24 22:07:11.999 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 24 22:07:13 compute-0 nova_compute[189608]: 2025-11-24 22:07:13.138 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:07:13 compute-0 nova_compute[189608]: 2025-11-24 22:07:13.674 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Updating instance_info_cache with network_info: [{"id": "3223b8cb-74bd-4db9-8dd2-441f7c81c71c", "address": "fa:16:3e:3e:40:c3", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.166", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3223b8cb-74", "ovs_interfaceid": "3223b8cb-74bd-4db9-8dd2-441f7c81c71c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:07:13 compute-0 nova_compute[189608]: 2025-11-24 22:07:13.709 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Releasing lock "refresh_cache-828f9a8f-602f-4ad5-a0b0-5a48a328d20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:07:13 compute-0 nova_compute[189608]: 2025-11-24 22:07:13.710 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 24 22:07:13 compute-0 nova_compute[189608]: 2025-11-24 22:07:13.711 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:07:13 compute-0 nova_compute[189608]: 2025-11-24 22:07:13.739 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:07:13 compute-0 nova_compute[189608]: 2025-11-24 22:07:13.740 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:07:13 compute-0 nova_compute[189608]: 2025-11-24 22:07:13.741 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:07:13 compute-0 nova_compute[189608]: 2025-11-24 22:07:13.742 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 22:07:13 compute-0 nova_compute[189608]: 2025-11-24 22:07:13.816 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:07:13 compute-0 nova_compute[189608]: 2025-11-24 22:07:13.842 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:07:13 compute-0 nova_compute[189608]: 2025-11-24 22:07:13.939 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:07:13 compute-0 nova_compute[189608]: 2025-11-24 22:07:13.940 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:07:14 compute-0 nova_compute[189608]: 2025-11-24 22:07:14.037 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:07:14 compute-0 nova_compute[189608]: 2025-11-24 22:07:14.038 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:07:14 compute-0 nova_compute[189608]: 2025-11-24 22:07:14.142 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:07:14 compute-0 nova_compute[189608]: 2025-11-24 22:07:14.143 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:07:14 compute-0 nova_compute[189608]: 2025-11-24 22:07:14.237 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:07:14 compute-0 nova_compute[189608]: 2025-11-24 22:07:14.251 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:07:14 compute-0 nova_compute[189608]: 2025-11-24 22:07:14.347 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:07:14 compute-0 nova_compute[189608]: 2025-11-24 22:07:14.348 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:07:14 compute-0 nova_compute[189608]: 2025-11-24 22:07:14.466 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk --force-share --output=json" returned: 0 in 0.118s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:07:14 compute-0 nova_compute[189608]: 2025-11-24 22:07:14.467 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:07:14 compute-0 nova_compute[189608]: 2025-11-24 22:07:14.536 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:07:14 compute-0 nova_compute[189608]: 2025-11-24 22:07:14.538 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:07:14 compute-0 podman[241154]: 2025-11-24 22:07:14.56244585 +0000 UTC m=+0.103358217 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 24 22:07:14 compute-0 nova_compute[189608]: 2025-11-24 22:07:14.611 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:07:15 compute-0 nova_compute[189608]: 2025-11-24 22:07:15.186 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:07:15 compute-0 nova_compute[189608]: 2025-11-24 22:07:15.188 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5053MB free_disk=72.1863784790039GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 22:07:15 compute-0 nova_compute[189608]: 2025-11-24 22:07:15.189 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:07:15 compute-0 nova_compute[189608]: 2025-11-24 22:07:15.190 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:07:15 compute-0 nova_compute[189608]: 2025-11-24 22:07:15.272 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance ea741b45-c6b4-41c0-a70f-c752b616faa2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:07:15 compute-0 nova_compute[189608]: 2025-11-24 22:07:15.273 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance 828f9a8f-602f-4ad5-a0b0-5a48a328d20e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:07:15 compute-0 nova_compute[189608]: 2025-11-24 22:07:15.273 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 22:07:15 compute-0 nova_compute[189608]: 2025-11-24 22:07:15.274 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 22:07:15 compute-0 nova_compute[189608]: 2025-11-24 22:07:15.362 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:07:15 compute-0 nova_compute[189608]: 2025-11-24 22:07:15.376 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:07:15 compute-0 nova_compute[189608]: 2025-11-24 22:07:15.379 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 22:07:15 compute-0 nova_compute[189608]: 2025-11-24 22:07:15.380 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.190s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:07:16 compute-0 nova_compute[189608]: 2025-11-24 22:07:16.462 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:07:16 compute-0 nova_compute[189608]: 2025-11-24 22:07:16.463 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:07:16 compute-0 nova_compute[189608]: 2025-11-24 22:07:16.463 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:07:16 compute-0 nova_compute[189608]: 2025-11-24 22:07:16.464 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:07:16 compute-0 nova_compute[189608]: 2025-11-24 22:07:16.464 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:07:16 compute-0 podman[241186]: 2025-11-24 22:07:16.572067758 +0000 UTC m=+0.111737880 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 24 22:07:16 compute-0 podman[241185]: 2025-11-24 22:07:16.608996725 +0000 UTC m=+0.153489063 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 22:07:17 compute-0 nova_compute[189608]: 2025-11-24 22:07:17.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:07:17 compute-0 nova_compute[189608]: 2025-11-24 22:07:17.794 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 22:07:18 compute-0 nova_compute[189608]: 2025-11-24 22:07:18.140 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:07:18 compute-0 nova_compute[189608]: 2025-11-24 22:07:18.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:07:18 compute-0 nova_compute[189608]: 2025-11-24 22:07:18.819 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:07:23 compute-0 nova_compute[189608]: 2025-11-24 22:07:23.144 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:07:23 compute-0 nova_compute[189608]: 2025-11-24 22:07:23.821 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:07:24 compute-0 podman[241227]: 2025-11-24 22:07:24.576226741 +0000 UTC m=+0.114349049 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 24 22:07:27 compute-0 sshd-session[241250]: Invalid user solana from 45.148.10.240 port 33512
Nov 24 22:07:27 compute-0 sshd-session[241250]: Connection closed by invalid user solana 45.148.10.240 port 33512 [preauth]
Nov 24 22:07:28 compute-0 nova_compute[189608]: 2025-11-24 22:07:28.148 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:07:28 compute-0 nova_compute[189608]: 2025-11-24 22:07:28.824 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:07:29 compute-0 podman[203795]: time="2025-11-24T22:07:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:07:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:07:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 24 22:07:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:07:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4796 "" "Go-http-client/1.1"
Nov 24 22:07:31 compute-0 openstack_network_exporter[205945]: ERROR   22:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:07:31 compute-0 openstack_network_exporter[205945]: ERROR   22:07:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:07:31 compute-0 openstack_network_exporter[205945]: ERROR   22:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:07:31 compute-0 openstack_network_exporter[205945]: ERROR   22:07:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:07:31 compute-0 openstack_network_exporter[205945]: ERROR   22:07:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:07:31 compute-0 podman[241253]: 2025-11-24 22:07:31.571931886 +0000 UTC m=+0.128903249 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=multipathd, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:07:33 compute-0 nova_compute[189608]: 2025-11-24 22:07:33.151 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:07:33 compute-0 nova_compute[189608]: 2025-11-24 22:07:33.826 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:07:34 compute-0 podman[241271]: 2025-11-24 22:07:34.587379751 +0000 UTC m=+0.127335382 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 24 22:07:38 compute-0 nova_compute[189608]: 2025-11-24 22:07:38.153 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:07:38 compute-0 nova_compute[189608]: 2025-11-24 22:07:38.829 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:07:40 compute-0 podman[241290]: 2025-11-24 22:07:40.609310367 +0000 UTC m=+0.147183120 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_id=edpm, managed_by=edpm_ansible, release-0.7.12=, vendor=Red Hat, Inc., version=9.4, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, vcs-type=git, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, distribution-scope=public, io.buildah.version=1.29.0, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Nov 24 22:07:40 compute-0 podman[241291]: 2025-11-24 22:07:40.616053504 +0000 UTC m=+0.142101335 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1755695350, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, version=9.6, distribution-scope=public, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Nov 24 22:07:40 compute-0 podman[241292]: 2025-11-24 22:07:40.618128707 +0000 UTC m=+0.136975687 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 22:07:43 compute-0 nova_compute[189608]: 2025-11-24 22:07:43.158 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:07:43 compute-0 nova_compute[189608]: 2025-11-24 22:07:43.838 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:07:44 compute-0 podman[241346]: 2025-11-24 22:07:44.845963242 +0000 UTC m=+0.132852392 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 24 22:07:47 compute-0 podman[241368]: 2025-11-24 22:07:47.545158031 +0000 UTC m=+0.085822921 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 22:07:47 compute-0 podman[241367]: 2025-11-24 22:07:47.624820481 +0000 UTC m=+0.157979721 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 22:07:48 compute-0 nova_compute[189608]: 2025-11-24 22:07:48.160 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:07:48 compute-0 nova_compute[189608]: 2025-11-24 22:07:48.843 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:07:53 compute-0 nova_compute[189608]: 2025-11-24 22:07:53.163 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:07:53 compute-0 nova_compute[189608]: 2025-11-24 22:07:53.846 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:07:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:07:54.562 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:07:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:07:54.563 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:07:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:07:54.564 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:07:55 compute-0 podman[241413]: 2025-11-24 22:07:55.570777702 +0000 UTC m=+0.132051407 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 24 22:07:58 compute-0 nova_compute[189608]: 2025-11-24 22:07:58.165 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:07:58 compute-0 nova_compute[189608]: 2025-11-24 22:07:58.849 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:07:59 compute-0 podman[203795]: time="2025-11-24T22:07:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:07:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:07:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 24 22:07:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:07:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4801 "" "Go-http-client/1.1"
Nov 24 22:08:01 compute-0 openstack_network_exporter[205945]: ERROR   22:08:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:08:01 compute-0 openstack_network_exporter[205945]: ERROR   22:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:08:01 compute-0 openstack_network_exporter[205945]: ERROR   22:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:08:01 compute-0 openstack_network_exporter[205945]: ERROR   22:08:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:08:01 compute-0 openstack_network_exporter[205945]: ERROR   22:08:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:08:02 compute-0 podman[241438]: 2025-11-24 22:08:02.548953221 +0000 UTC m=+0.110001410 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 24 22:08:03 compute-0 nova_compute[189608]: 2025-11-24 22:08:03.169 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:08:03 compute-0 nova_compute[189608]: 2025-11-24 22:08:03.854 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:08:05 compute-0 podman[241456]: 2025-11-24 22:08:05.552790624 +0000 UTC m=+0.112823677 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 22:08:07 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 24 22:08:08 compute-0 nova_compute[189608]: 2025-11-24 22:08:08.172 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:08:08 compute-0 nova_compute[189608]: 2025-11-24 22:08:08.857 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:08:10 compute-0 nova_compute[189608]: 2025-11-24 22:08:10.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:08:10 compute-0 nova_compute[189608]: 2025-11-24 22:08:10.796 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 22:08:10 compute-0 nova_compute[189608]: 2025-11-24 22:08:10.797 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 22:08:11 compute-0 nova_compute[189608]: 2025-11-24 22:08:11.020 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "refresh_cache-ea741b45-c6b4-41c0-a70f-c752b616faa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:08:11 compute-0 nova_compute[189608]: 2025-11-24 22:08:11.022 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquired lock "refresh_cache-ea741b45-c6b4-41c0-a70f-c752b616faa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:08:11 compute-0 nova_compute[189608]: 2025-11-24 22:08:11.023 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 24 22:08:11 compute-0 nova_compute[189608]: 2025-11-24 22:08:11.024 189613 DEBUG nova.objects.instance [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lazy-loading 'info_cache' on Instance uuid ea741b45-c6b4-41c0-a70f-c752b616faa2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:08:11 compute-0 podman[241478]: 2025-11-24 22:08:11.546876843 +0000 UTC m=+0.100726787 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., release=1755695350, version=9.6, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container)
Nov 24 22:08:11 compute-0 podman[241477]: 2025-11-24 22:08:11.550248436 +0000 UTC m=+0.106189324 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, distribution-scope=public, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, vendor=Red Hat, Inc., container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, release-0.7.12=, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, managed_by=edpm_ansible, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Nov 24 22:08:11 compute-0 podman[241479]: 2025-11-24 22:08:11.585153036 +0000 UTC m=+0.124176296 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 24 22:08:12 compute-0 nova_compute[189608]: 2025-11-24 22:08:12.464 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Updating instance_info_cache with network_info: [{"id": "5430cfcb-550b-4518-9caa-0720f99730b9", "address": "fa:16:3e:85:21:ae", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5430cfcb-55", "ovs_interfaceid": "5430cfcb-550b-4518-9caa-0720f99730b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:08:12 compute-0 nova_compute[189608]: 2025-11-24 22:08:12.487 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Releasing lock "refresh_cache-ea741b45-c6b4-41c0-a70f-c752b616faa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:08:12 compute-0 nova_compute[189608]: 2025-11-24 22:08:12.487 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 24 22:08:13 compute-0 nova_compute[189608]: 2025-11-24 22:08:13.174 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:08:13 compute-0 nova_compute[189608]: 2025-11-24 22:08:13.792 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:08:13 compute-0 nova_compute[189608]: 2025-11-24 22:08:13.858 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:08:13 compute-0 nova_compute[189608]: 2025-11-24 22:08:13.860 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:08:13 compute-0 nova_compute[189608]: 2025-11-24 22:08:13.861 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:08:13 compute-0 nova_compute[189608]: 2025-11-24 22:08:13.862 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 22:08:13 compute-0 nova_compute[189608]: 2025-11-24 22:08:13.864 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:08:13 compute-0 nova_compute[189608]: 2025-11-24 22:08:13.984 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:08:14 compute-0 nova_compute[189608]: 2025-11-24 22:08:14.072 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:08:14 compute-0 nova_compute[189608]: 2025-11-24 22:08:14.075 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:08:14 compute-0 nova_compute[189608]: 2025-11-24 22:08:14.136 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:08:14 compute-0 nova_compute[189608]: 2025-11-24 22:08:14.138 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:08:14 compute-0 nova_compute[189608]: 2025-11-24 22:08:14.197 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:08:14 compute-0 nova_compute[189608]: 2025-11-24 22:08:14.199 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:08:14 compute-0 nova_compute[189608]: 2025-11-24 22:08:14.267 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:08:14 compute-0 nova_compute[189608]: 2025-11-24 22:08:14.279 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:08:14 compute-0 nova_compute[189608]: 2025-11-24 22:08:14.350 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:08:14 compute-0 nova_compute[189608]: 2025-11-24 22:08:14.352 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:08:14 compute-0 nova_compute[189608]: 2025-11-24 22:08:14.431 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:08:14 compute-0 nova_compute[189608]: 2025-11-24 22:08:14.432 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:08:14 compute-0 nova_compute[189608]: 2025-11-24 22:08:14.518 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:08:14 compute-0 nova_compute[189608]: 2025-11-24 22:08:14.520 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:08:14 compute-0 nova_compute[189608]: 2025-11-24 22:08:14.598 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
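[editor's note] The Running cmd / CMD ... returned pairs above are nova's libvirt driver measuring per-instance disk usage: each probe runs qemu-img info wrapped in oslo_concurrency.prlimit, which caps the child's address space (--as=1073741824, i.e. 1 GiB) and CPU time (--cpu=30 seconds) so a misbehaving image cannot stall the compute agent. A minimal sketch of the underlying probe without the prlimit wrapper; the helper name and the printed fields are illustrative only:

    import json
    import os
    import subprocess

    def qemu_img_info(path):
        # Same probe as in the log lines above: ask qemu-img for JSON metadata
        # without taking a write lock on the (possibly in-use) disk file.
        env = dict(os.environ, LC_ALL="C", LANG="C")
        out = subprocess.check_output(
            ["qemu-img", "info", "--force-share", "--output=json", path], env=env)
        info = json.loads(out)
        # "virtual-size" is the guest-visible size in bytes, "actual-size" the
        # space currently allocated for the file on the host filesystem.
        return info["format"], info["virtual-size"], info["actual-size"]

    print(qemu_img_info(
        "/var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk"))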
Nov 24 22:08:15 compute-0 nova_compute[189608]: 2025-11-24 22:08:15.013 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:08:15 compute-0 nova_compute[189608]: 2025-11-24 22:08:15.016 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5046MB free_disk=72.1863784790039GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 22:08:15 compute-0 nova_compute[189608]: 2025-11-24 22:08:15.017 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:08:15 compute-0 nova_compute[189608]: 2025-11-24 22:08:15.017 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:08:15 compute-0 nova_compute[189608]: 2025-11-24 22:08:15.120 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance ea741b45-c6b4-41c0-a70f-c752b616faa2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:08:15 compute-0 nova_compute[189608]: 2025-11-24 22:08:15.122 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance 828f9a8f-602f-4ad5-a0b0-5a48a328d20e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:08:15 compute-0 nova_compute[189608]: 2025-11-24 22:08:15.123 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 22:08:15 compute-0 nova_compute[189608]: 2025-11-24 22:08:15.123 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 22:08:15 compute-0 nova_compute[189608]: 2025-11-24 22:08:15.218 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:08:15 compute-0 nova_compute[189608]: 2025-11-24 22:08:15.239 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:08:15 compute-0 nova_compute[189608]: 2025-11-24 22:08:15.242 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 22:08:15 compute-0 nova_compute[189608]: 2025-11-24 22:08:15.243 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.226s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
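[editor's note] The inventory reported to placement above pairs each resource class with a total, a reserved amount and an allocation_ratio. Placement's schedulable capacity is, roughly, (total - reserved) * allocation_ratio; a quick sketch of that arithmetic with the exact figures from the log (an approximation of placement's accounting, not its code):

    # Inventory as reported for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df.
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc:9s} schedulable capacity ~ {capacity}")
    # MEMORY_MB ~ 7167.0, VCPU ~ 32.0, DISK_GB ~ 70.2 - which is why the two
    # running instances (2 vCPUs, 1024 MB, 4 GB) leave plenty of headroom.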
Nov 24 22:08:15 compute-0 podman[241556]: 2025-11-24 22:08:15.551520079 +0000 UTC m=+0.109237328 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
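[editor's note] The podman line above is the periodic healthcheck of the node_exporter container: podman runs the configured test ('/openstack/healthcheck node_exporter') and records the status and failing streak on the container. A sketch of reading that state back; the JSON key differs between podman releases ("Health" vs "Healthcheck"), so both are tried:

    import json
    import subprocess

    data = json.loads(subprocess.check_output(["podman", "inspect", "node_exporter"]))[0]
    state = data.get("State", {})
    health = state.get("Health") or state.get("Healthcheck") or {}
    print(health.get("Status"), "failing streak:", health.get("FailingStreak"))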
Nov 24 22:08:16 compute-0 nova_compute[189608]: 2025-11-24 22:08:16.245 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:08:16 compute-0 nova_compute[189608]: 2025-11-24 22:08:16.246 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:08:16 compute-0 nova_compute[189608]: 2025-11-24 22:08:16.246 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:08:16 compute-0 nova_compute[189608]: 2025-11-24 22:08:16.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:08:16 compute-0 nova_compute[189608]: 2025-11-24 22:08:16.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.620 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.620 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.620 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.621 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f4c57fbbfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.622 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.623 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.623 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.623 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.624 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.624 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.624 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.624 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.624 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.625 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.625 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.625 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.625 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.625 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.625 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.626 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.626 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.626 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.626 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.626 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.626 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.626 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.627 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.627 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.627 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.628 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'ea741b45-c6b4-41c0-a70f-c752b616faa2', 'name': 'test_0', 'flavor': {'id': 'cb3b4b8d-66d1-4524-ab48-d562ca8e5b4b', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a63b9561-12dc-4c11-858f-aa6fafbed036'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '309342b7e3e849b2a5dd56651d8fa068', 'user_id': '572aaac113f54af8a894707849aed6bf', 'hostId': '138597a5fd3bc726da772e57d320c412a04081cc4fa53fa9b77f5b6a', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.632 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '828f9a8f-602f-4ad5-a0b0-5a48a328d20e', 'name': 'vn-7lqlsa5-5i26uioiiugb-edy6vcflj352-vnf-mrrqmlwdy5wk', 'flavor': {'id': 'cb3b4b8d-66d1-4524-ab48-d562ca8e5b4b', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a63b9561-12dc-4c11-858f-aa6fafbed036'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '309342b7e3e849b2a5dd56651d8fa068', 'user_id': '572aaac113f54af8a894707849aed6bf', 'hostId': '138597a5fd3bc726da772e57d320c412a04081cc4fa53fa9b77f5b6a', 'status': 'active', 'metadata': {'metering.server_group': 'b438824c-ce52-4539-9db6-355e0ca018db'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
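[editor's note] The two "instance data" dicts above come from ceilometer's libvirt discovery; every pollster that follows resolves each entry to its libvirt domain and reads counters from it. A minimal sketch of that resolution, assuming a read-only local connection URI:

    import libvirt  # python3-libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    for uuid in ("ea741b45-c6b4-41c0-a70f-c752b616faa2",
                 "828f9a8f-602f-4ad5-a0b0-5a48a328d20e"):
        dom = conn.lookupByUUIDString(uuid)
        # instance-00000001 / instance-00000002, matching OS-EXT-SRV-ATTR:instance_name
        print(dom.name(), "active:", bool(dom.isActive()))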
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.632 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.633 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.633 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.633 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.634 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-24T22:08:17.633308) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.638 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.642 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.642 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.643 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f4c580340b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.643 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.643 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.643 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.643 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.643 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.644 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.644 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
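[editor's note] network.outgoing.packets.drop and network.outgoing.packets.error are read from libvirt's per-vNIC counters. A sketch of the underlying call; the connection URI is an assumption and the tap device names are taken from the domain XML rather than hard-coded:

    import libvirt
    from xml.etree import ElementTree as ET

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("ea741b45-c6b4-41c0-a70f-c752b616faa2")
    for tgt in ET.fromstring(dom.XMLDesc()).findall("./devices/interface/target"):
        dev = tgt.get("dev")
        (rx_bytes, rx_pkts, rx_errs, rx_drop,
         tx_bytes, tx_pkts, tx_errs, tx_drop) = dom.interfaceStats(dev)
        # tx_drop / tx_errs are the 0-valued samples logged above.
        print(dev, "tx_drop:", tx_drop, "tx_errs:", tx_errs)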
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.645 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f4c57fba4e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.645 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.645 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.645 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-24T22:08:17.643716) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.645 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.645 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.646 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-24T22:08:17.645795) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.677 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.678 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.678 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.715 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.716 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.716 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.717 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
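[editor's note] disk.device.capacity (and disk.device.usage further down) map onto libvirt's blockInfo(), which reports capacity, allocation and physical size per attached disk; three block devices per instance explains the three samples each, and the two 1073741824-byte values match the 1 GiB root and ephemeral disks probed earlier. Sketch, same connection assumptions as above:

    import libvirt
    from xml.etree import ElementTree as ET

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("ea741b45-c6b4-41c0-a70f-c752b616faa2")
    for tgt in ET.fromstring(dom.XMLDesc()).findall("./devices/disk/target"):
        dev = tgt.get("dev")
        capacity, allocation, physical = dom.blockInfo(dev)
        # capacity -> disk.device.capacity, allocation -> disk.device.usage
        print(dev, "capacity:", capacity, "allocation:", allocation)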
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.717 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f4c57fbb080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.717 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.717 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.717 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.717 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.718 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-24T22:08:17.717574) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.806 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.806 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.806 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.905 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.905 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.906 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.906 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
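[editor's note] disk.device.read.bytes (and disk.device.read.requests a little further down) come from libvirt's classic blockStats() tuple: read requests, read bytes, write requests, write bytes, errors. Sketch, same assumptions as above:

    import libvirt
    from xml.etree import ElementTree as ET

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("ea741b45-c6b4-41c0-a70f-c752b616faa2")
    for tgt in ET.fromstring(dom.XMLDesc()).findall("./devices/disk/target"):
        dev = tgt.get("dev")
        rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats(dev)
        print(dev, "read bytes:", rd_bytes, "read requests:", rd_req)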
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.907 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f4c57fbb1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.907 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.907 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.907 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.907 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.908 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.latency volume: 667223647 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.908 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.latency volume: 118019506 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.908 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.latency volume: 84934831 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.909 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.latency volume: 1140139100 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.909 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.latency volume: 133972753 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.910 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.latency volume: 92855613 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.910 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
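[editor's note] disk.device.read.latency is cumulative time spent on reads, in nanoseconds, from libvirt's extended per-device statistics (the rd_total_times field of blockStatsFlags()); 667223647 ns is roughly 0.67 s of accumulated read time for the first device. Sketch, same assumptions:

    import libvirt
    from xml.etree import ElementTree as ET

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("ea741b45-c6b4-41c0-a70f-c752b616faa2")
    for tgt in ET.fromstring(dom.XMLDesc()).findall("./devices/disk/target"):
        dev = tgt.get("dev")
        stats = dom.blockStatsFlags(dev)
        print(dev, "rd_total_times:", stats.get("rd_total_times"), "ns")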
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.911 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f4c57fb9730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.911 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.911 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.911 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.911 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-24T22:08:17.907785) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.911 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.912 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-24T22:08:17.911853) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.952 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/cpu volume: 35710000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.978 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/cpu volume: 62950000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.979 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
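[editor's note] The cpu samples are cumulative guest CPU time in nanoseconds (35710000000 ns is about 35.7 s for test_0, 62950000000 ns about 63 s for the second instance); libvirt exposes this as the last element of dom.info(). Sketch, same assumptions:

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    for uuid in ("ea741b45-c6b4-41c0-a70f-c752b616faa2",
                 "828f9a8f-602f-4ad5-a0b0-5a48a328d20e"):
        dom = conn.lookupByUUIDString(uuid)
        state, max_mem_kib, mem_kib, vcpus, cpu_time_ns = dom.info()
        print(dom.name(), "cpu_time:", cpu_time_ns, "ns over", vcpus, "vCPU")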
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f4c57fbb200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.979 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.979 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.980 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.980 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.980 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-24T22:08:17.980195) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.980 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.981 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.981 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.982 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.982 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.983 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.983 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f4c57fbb260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.984 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.984 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.984 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.984 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.985 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.985 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.986 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.986 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.987 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.987 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.988 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f4c57fbb2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.989 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.989 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.989 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.990 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.990 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-24T22:08:17.984832) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.990 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-24T22:08:17.990104) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.990 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.991 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.991 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.991 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.bytes volume: 41795584 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.992 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.992 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.993 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f4c57fbb320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.994 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.994 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.994 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.994 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.994 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.latency volume: 2026985911 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.995 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-24T22:08:17.994608) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.995 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.latency volume: 25037348 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.996 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.996 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.latency volume: 1334483648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.997 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.latency volume: 12781157 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.997 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.998 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.998 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f4c57fbb380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.998 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.998 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.999 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.999 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-24T22:08:17.999265) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:08:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.999 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:17.999 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.000 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.000 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.001 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.requests volume: 236 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.001 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.001 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.002 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.002 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f4c58034380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.003 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.003 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.003 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.003 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.003 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.004 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.005 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.005 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f4c57fbb740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.005 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.006 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-24T22:08:18.003595) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.006 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.006 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.006 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.006 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.007 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-24T22:08:18.006495) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.007 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.incoming.bytes.delta volume: 4759 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.008 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.008 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f4c57fbb3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.008 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.008 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.009 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.009 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.010 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.010 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f4c57fbb9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.010 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.011 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-24T22:08:18.009280) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.011 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f4c57fbb440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.011 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.011 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.012 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.012 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.013 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.013 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-24T22:08:18.012172) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.013 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f4c57fbbc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.013 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.014 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.014 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.014 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.014 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.015 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.incoming.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.015 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.016 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f4c57fbbcb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.016 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.016 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.016 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.017 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.017 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.017 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-24T22:08:18.014408) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.017 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.018 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.018 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f4c57fbbd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.018 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-24T22:08:18.017088) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.018 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.018 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.019 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.019 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.019 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.019 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-24T22:08:18.019090) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.019 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.020 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.020 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f4c57fbbda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.020 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.020 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.020 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.020 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.020 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.outgoing.bytes volume: 2202 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.020 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-24T22:08:18.020564) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.021 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.outgoing.bytes volume: 4822 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.021 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.021 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f4c57fbbe30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.021 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.021 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.022 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.022 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.022 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.outgoing.bytes.delta volume: 140 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.022 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.outgoing.bytes.delta volume: 4822 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.022 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.023 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f4c57fbb680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.023 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-24T22:08:18.022095) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.023 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.023 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.023 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.023 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.024 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-24T22:08:18.023857) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.024 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/memory.usage volume: 48.9453125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.024 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/memory.usage volume: 49.12890625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.024 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.024 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f4c57fbbec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.025 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.025 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f4c57fbb6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.025 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.025 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.025 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.025 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.026 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.026 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-24T22:08:18.025737) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.026 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.incoming.bytes volume: 4849 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.026 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.026 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f4c59b99130>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.027 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.027 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.027 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.027 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.027 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.028 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-24T22:08:18.027565) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.028 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.028 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.028 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.029 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.029 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.030 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.030 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f4c57fbbf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.030 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.030 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.030 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.031 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.031 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-24T22:08:18.031010) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.031 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.031 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.outgoing.packets volume: 42 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.032 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.033 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.033 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.033 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.033 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.033 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.033 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.034 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.034 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.034 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.034 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.034 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.034 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.034 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.035 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.035 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.035 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.035 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.035 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.035 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.035 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.036 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.036 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.036 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:08:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:08:18.036 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:08:18 compute-0 nova_compute[189608]: 2025-11-24 22:08:18.176 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:08:18 compute-0 podman[241581]: 2025-11-24 22:08:18.546401057 +0000 UTC m=+0.103005027 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 22:08:18 compute-0 podman[241580]: 2025-11-24 22:08:18.596566474 +0000 UTC m=+0.158748735 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 22:08:18 compute-0 nova_compute[189608]: 2025-11-24 22:08:18.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:08:18 compute-0 nova_compute[189608]: 2025-11-24 22:08:18.868 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:08:19 compute-0 nova_compute[189608]: 2025-11-24 22:08:19.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:08:19 compute-0 nova_compute[189608]: 2025-11-24 22:08:19.793 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 22:08:23 compute-0 nova_compute[189608]: 2025-11-24 22:08:23.180 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:08:23 compute-0 nova_compute[189608]: 2025-11-24 22:08:23.871 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:08:26 compute-0 podman[241621]: 2025-11-24 22:08:26.562102725 +0000 UTC m=+0.116571182 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 24 22:08:28 compute-0 nova_compute[189608]: 2025-11-24 22:08:28.183 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:08:28 compute-0 nova_compute[189608]: 2025-11-24 22:08:28.874 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:08:29 compute-0 podman[203795]: time="2025-11-24T22:08:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:08:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:08:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 24 22:08:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:08:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4801 "" "Go-http-client/1.1"
Nov 24 22:08:31 compute-0 openstack_network_exporter[205945]: ERROR   22:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:08:31 compute-0 openstack_network_exporter[205945]: ERROR   22:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:08:31 compute-0 openstack_network_exporter[205945]: ERROR   22:08:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:08:31 compute-0 openstack_network_exporter[205945]: ERROR   22:08:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:08:31 compute-0 openstack_network_exporter[205945]: ERROR   22:08:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:08:33 compute-0 nova_compute[189608]: 2025-11-24 22:08:33.185 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:08:33 compute-0 podman[241650]: 2025-11-24 22:08:33.556974365 +0000 UTC m=+0.109863546 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 22:08:33 compute-0 nova_compute[189608]: 2025-11-24 22:08:33.876 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:08:36 compute-0 podman[241669]: 2025-11-24 22:08:36.546196231 +0000 UTC m=+0.098729766 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm)
Nov 24 22:08:38 compute-0 nova_compute[189608]: 2025-11-24 22:08:38.188 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:08:38 compute-0 nova_compute[189608]: 2025-11-24 22:08:38.879 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:08:42 compute-0 podman[241691]: 2025-11-24 22:08:42.56510176 +0000 UTC m=+0.104085610 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, tcib_managed=true)
Nov 24 22:08:42 compute-0 podman[241690]: 2025-11-24 22:08:42.579549463 +0000 UTC m=+0.120714190 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, distribution-scope=public, architecture=x86_64, name=ubi9-minimal, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=9.6, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, maintainer=Red Hat, Inc.)
Nov 24 22:08:42 compute-0 podman[241689]: 2025-11-24 22:08:42.582721079 +0000 UTC m=+0.135562464 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, container_name=kepler, name=ubi9, config_id=edpm, release-0.7.12=, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., release=1214.1726694543, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9)
Nov 24 22:08:43 compute-0 nova_compute[189608]: 2025-11-24 22:08:43.192 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:08:43 compute-0 nova_compute[189608]: 2025-11-24 22:08:43.882 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:08:46 compute-0 podman[241740]: 2025-11-24 22:08:46.518073632 +0000 UTC m=+0.080036753 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 24 22:08:48 compute-0 nova_compute[189608]: 2025-11-24 22:08:48.193 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:08:48 compute-0 nova_compute[189608]: 2025-11-24 22:08:48.885 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:08:49 compute-0 podman[241765]: 2025-11-24 22:08:49.517758477 +0000 UTC m=+0.075379120 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:08:49 compute-0 podman[241764]: 2025-11-24 22:08:49.552463081 +0000 UTC m=+0.107154124 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Nov 24 22:08:53 compute-0 nova_compute[189608]: 2025-11-24 22:08:53.195 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:08:53 compute-0 nova_compute[189608]: 2025-11-24 22:08:53.888 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:08:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:08:54.563 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:08:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:08:54.564 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:08:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:08:54.564 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:08:57 compute-0 podman[241808]: 2025-11-24 22:08:57.555242842 +0000 UTC m=+0.106993969 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 24 22:08:58 compute-0 nova_compute[189608]: 2025-11-24 22:08:58.199 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:08:58 compute-0 nova_compute[189608]: 2025-11-24 22:08:58.891 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:08:59 compute-0 podman[203795]: time="2025-11-24T22:08:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:08:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:08:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 24 22:08:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:08:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4796 "" "Go-http-client/1.1"
Nov 24 22:09:01 compute-0 openstack_network_exporter[205945]: ERROR   22:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:09:01 compute-0 openstack_network_exporter[205945]: ERROR   22:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:09:01 compute-0 openstack_network_exporter[205945]: ERROR   22:09:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:09:01 compute-0 openstack_network_exporter[205945]: ERROR   22:09:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:09:01 compute-0 openstack_network_exporter[205945]: ERROR   22:09:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:09:02 compute-0 sshd-session[241833]: Invalid user solv from 193.32.162.145 port 54596
Nov 24 22:09:02 compute-0 sshd-session[241833]: Connection closed by invalid user solv 193.32.162.145 port 54596 [preauth]
Nov 24 22:09:03 compute-0 nova_compute[189608]: 2025-11-24 22:09:03.201 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:09:03 compute-0 nova_compute[189608]: 2025-11-24 22:09:03.895 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:09:04 compute-0 podman[241835]: 2025-11-24 22:09:04.539303793 +0000 UTC m=+0.088842934 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd)
Nov 24 22:09:07 compute-0 podman[241855]: 2025-11-24 22:09:07.555674909 +0000 UTC m=+0.097944922 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:09:08 compute-0 nova_compute[189608]: 2025-11-24 22:09:08.203 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:09:08 compute-0 nova_compute[189608]: 2025-11-24 22:09:08.789 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:09:08 compute-0 nova_compute[189608]: 2025-11-24 22:09:08.898 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:09:10 compute-0 nova_compute[189608]: 2025-11-24 22:09:10.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:09:10 compute-0 nova_compute[189608]: 2025-11-24 22:09:10.794 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 22:09:11 compute-0 nova_compute[189608]: 2025-11-24 22:09:11.055 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "refresh_cache-828f9a8f-602f-4ad5-a0b0-5a48a328d20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:09:11 compute-0 nova_compute[189608]: 2025-11-24 22:09:11.061 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquired lock "refresh_cache-828f9a8f-602f-4ad5-a0b0-5a48a328d20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:09:11 compute-0 nova_compute[189608]: 2025-11-24 22:09:11.063 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 24 22:09:12 compute-0 nova_compute[189608]: 2025-11-24 22:09:12.435 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Updating instance_info_cache with network_info: [{"id": "3223b8cb-74bd-4db9-8dd2-441f7c81c71c", "address": "fa:16:3e:3e:40:c3", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.166", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3223b8cb-74", "ovs_interfaceid": "3223b8cb-74bd-4db9-8dd2-441f7c81c71c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:09:12 compute-0 nova_compute[189608]: 2025-11-24 22:09:12.451 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Releasing lock "refresh_cache-828f9a8f-602f-4ad5-a0b0-5a48a328d20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:09:12 compute-0 nova_compute[189608]: 2025-11-24 22:09:12.452 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 24 22:09:13 compute-0 nova_compute[189608]: 2025-11-24 22:09:13.207 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:09:13 compute-0 podman[241875]: 2025-11-24 22:09:13.526257248 +0000 UTC m=+0.078211458 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., distribution-scope=public, config_id=edpm, vendor=Red Hat, Inc., name=ubi9, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, release-0.7.12=, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, release=1214.1726694543, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Nov 24 22:09:13 compute-0 podman[241877]: 2025-11-24 22:09:13.553374749 +0000 UTC m=+0.081231790 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi)
Nov 24 22:09:13 compute-0 podman[241876]: 2025-11-24 22:09:13.571883856 +0000 UTC m=+0.105446572 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, config_id=edpm, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 24 22:09:13 compute-0 nova_compute[189608]: 2025-11-24 22:09:13.901 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:09:14 compute-0 nova_compute[189608]: 2025-11-24 22:09:14.792 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:09:14 compute-0 nova_compute[189608]: 2025-11-24 22:09:14.821 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:09:14 compute-0 nova_compute[189608]: 2025-11-24 22:09:14.823 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:09:14 compute-0 nova_compute[189608]: 2025-11-24 22:09:14.823 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:09:14 compute-0 nova_compute[189608]: 2025-11-24 22:09:14.824 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 22:09:14 compute-0 nova_compute[189608]: 2025-11-24 22:09:14.938 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:09:15 compute-0 nova_compute[189608]: 2025-11-24 22:09:15.007 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:09:15 compute-0 nova_compute[189608]: 2025-11-24 22:09:15.009 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:09:15 compute-0 nova_compute[189608]: 2025-11-24 22:09:15.071 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:09:15 compute-0 nova_compute[189608]: 2025-11-24 22:09:15.073 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:09:15 compute-0 nova_compute[189608]: 2025-11-24 22:09:15.146 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:09:15 compute-0 nova_compute[189608]: 2025-11-24 22:09:15.147 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:09:15 compute-0 nova_compute[189608]: 2025-11-24 22:09:15.210 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:09:15 compute-0 nova_compute[189608]: 2025-11-24 22:09:15.222 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:09:15 compute-0 nova_compute[189608]: 2025-11-24 22:09:15.303 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:09:15 compute-0 nova_compute[189608]: 2025-11-24 22:09:15.306 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:09:15 compute-0 nova_compute[189608]: 2025-11-24 22:09:15.367 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:09:15 compute-0 nova_compute[189608]: 2025-11-24 22:09:15.370 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:09:15 compute-0 nova_compute[189608]: 2025-11-24 22:09:15.433 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:09:15 compute-0 nova_compute[189608]: 2025-11-24 22:09:15.435 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:09:15 compute-0 nova_compute[189608]: 2025-11-24 22:09:15.499 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:09:15 compute-0 nova_compute[189608]: 2025-11-24 22:09:15.896 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:09:15 compute-0 nova_compute[189608]: 2025-11-24 22:09:15.898 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5055MB free_disk=72.1863784790039GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 22:09:15 compute-0 nova_compute[189608]: 2025-11-24 22:09:15.899 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:09:15 compute-0 nova_compute[189608]: 2025-11-24 22:09:15.899 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:09:15 compute-0 nova_compute[189608]: 2025-11-24 22:09:15.974 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance ea741b45-c6b4-41c0-a70f-c752b616faa2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:09:15 compute-0 nova_compute[189608]: 2025-11-24 22:09:15.976 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance 828f9a8f-602f-4ad5-a0b0-5a48a328d20e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:09:15 compute-0 nova_compute[189608]: 2025-11-24 22:09:15.976 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 22:09:15 compute-0 nova_compute[189608]: 2025-11-24 22:09:15.977 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 22:09:16 compute-0 nova_compute[189608]: 2025-11-24 22:09:16.046 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:09:16 compute-0 nova_compute[189608]: 2025-11-24 22:09:16.065 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:09:16 compute-0 nova_compute[189608]: 2025-11-24 22:09:16.068 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 22:09:16 compute-0 nova_compute[189608]: 2025-11-24 22:09:16.068 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.169s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:09:17 compute-0 nova_compute[189608]: 2025-11-24 22:09:17.065 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:09:17 compute-0 nova_compute[189608]: 2025-11-24 22:09:17.067 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:09:17 compute-0 nova_compute[189608]: 2025-11-24 22:09:17.068 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:09:17 compute-0 podman[241954]: 2025-11-24 22:09:17.55622176 +0000 UTC m=+0.111106065 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 24 22:09:17 compute-0 nova_compute[189608]: 2025-11-24 22:09:17.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:09:17 compute-0 nova_compute[189608]: 2025-11-24 22:09:17.795 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:09:18 compute-0 nova_compute[189608]: 2025-11-24 22:09:18.209 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:09:18 compute-0 nova_compute[189608]: 2025-11-24 22:09:18.905 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:09:19 compute-0 sshd-session[241976]: Invalid user solana from 45.148.10.240 port 33880
Nov 24 22:09:19 compute-0 sshd-session[241976]: Connection closed by invalid user solana 45.148.10.240 port 33880 [preauth]
Nov 24 22:09:19 compute-0 nova_compute[189608]: 2025-11-24 22:09:19.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:09:19 compute-0 nova_compute[189608]: 2025-11-24 22:09:19.793 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 22:09:20 compute-0 podman[241980]: 2025-11-24 22:09:20.530977111 +0000 UTC m=+0.079471756 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 22:09:20 compute-0 podman[241979]: 2025-11-24 22:09:20.584888183 +0000 UTC m=+0.144829028 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller)
Nov 24 22:09:20 compute-0 nova_compute[189608]: 2025-11-24 22:09:20.795 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:09:23 compute-0 nova_compute[189608]: 2025-11-24 22:09:23.212 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:09:23 compute-0 nova_compute[189608]: 2025-11-24 22:09:23.909 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:09:28 compute-0 nova_compute[189608]: 2025-11-24 22:09:28.217 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:09:28 compute-0 podman[242024]: 2025-11-24 22:09:28.596562697 +0000 UTC m=+0.135689538 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 24 22:09:28 compute-0 nova_compute[189608]: 2025-11-24 22:09:28.913 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:09:29 compute-0 podman[203795]: time="2025-11-24T22:09:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:09:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:09:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 24 22:09:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:09:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4795 "" "Go-http-client/1.1"
Nov 24 22:09:31 compute-0 openstack_network_exporter[205945]: ERROR   22:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:09:31 compute-0 openstack_network_exporter[205945]: ERROR   22:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:09:31 compute-0 openstack_network_exporter[205945]: ERROR   22:09:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:09:31 compute-0 openstack_network_exporter[205945]: ERROR   22:09:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:09:31 compute-0 openstack_network_exporter[205945]: ERROR   22:09:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:09:33 compute-0 nova_compute[189608]: 2025-11-24 22:09:33.220 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:09:33 compute-0 nova_compute[189608]: 2025-11-24 22:09:33.917 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:09:35 compute-0 podman[242046]: 2025-11-24 22:09:35.560171011 +0000 UTC m=+0.112266691 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 24 22:09:38 compute-0 nova_compute[189608]: 2025-11-24 22:09:38.223 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:09:38 compute-0 podman[242066]: 2025-11-24 22:09:38.579280281 +0000 UTC m=+0.145213641 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0)
Nov 24 22:09:38 compute-0 nova_compute[189608]: 2025-11-24 22:09:38.920 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:09:43 compute-0 nova_compute[189608]: 2025-11-24 22:09:43.224 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:09:43 compute-0 nova_compute[189608]: 2025-11-24 22:09:43.925 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:09:44 compute-0 podman[242084]: 2025-11-24 22:09:44.532616981 +0000 UTC m=+0.085708576 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.buildah.version=1.33.7, architecture=x86_64, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Nov 24 22:09:44 compute-0 podman[242083]: 2025-11-24 22:09:44.556202444 +0000 UTC m=+0.115797669 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., name=ubi9, container_name=kepler, release-0.7.12=, com.redhat.component=ubi9-container, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, release=1214.1726694543, config_id=edpm, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9)
Nov 24 22:09:44 compute-0 podman[242085]: 2025-11-24 22:09:44.576589348 +0000 UTC m=+0.119941785 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 24 22:09:48 compute-0 nova_compute[189608]: 2025-11-24 22:09:48.225 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:09:48 compute-0 podman[242142]: 2025-11-24 22:09:48.502221998 +0000 UTC m=+0.061943042 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 24 22:09:48 compute-0 nova_compute[189608]: 2025-11-24 22:09:48.929 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:09:51 compute-0 podman[242166]: 2025-11-24 22:09:51.514894883 +0000 UTC m=+0.073642736 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:09:51 compute-0 podman[242165]: 2025-11-24 22:09:51.55765852 +0000 UTC m=+0.121045436 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 24 22:09:53 compute-0 nova_compute[189608]: 2025-11-24 22:09:53.229 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:09:53 compute-0 nova_compute[189608]: 2025-11-24 22:09:53.933 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:09:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:09:54.565 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:09:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:09:54.565 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:09:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:09:54.566 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:09:58 compute-0 nova_compute[189608]: 2025-11-24 22:09:58.232 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:09:58 compute-0 nova_compute[189608]: 2025-11-24 22:09:58.938 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:09:59 compute-0 podman[242206]: 2025-11-24 22:09:59.570262684 +0000 UTC m=+0.111076537 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 24 22:09:59 compute-0 podman[203795]: time="2025-11-24T22:09:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:09:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:09:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 24 22:09:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:09:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4797 "" "Go-http-client/1.1"
Nov 24 22:10:01 compute-0 openstack_network_exporter[205945]: ERROR   22:10:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:10:01 compute-0 openstack_network_exporter[205945]: ERROR   22:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:10:01 compute-0 openstack_network_exporter[205945]: ERROR   22:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:10:01 compute-0 openstack_network_exporter[205945]: ERROR   22:10:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:10:01 compute-0 openstack_network_exporter[205945]: ERROR   22:10:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:10:03 compute-0 nova_compute[189608]: 2025-11-24 22:10:03.236 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:10:03 compute-0 nova_compute[189608]: 2025-11-24 22:10:03.941 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:10:06 compute-0 podman[242230]: 2025-11-24 22:10:06.561873826 +0000 UTC m=+0.103420599 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 24 22:10:08 compute-0 nova_compute[189608]: 2025-11-24 22:10:08.240 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:10:08 compute-0 nova_compute[189608]: 2025-11-24 22:10:08.946 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:10:09 compute-0 podman[242250]: 2025-11-24 22:10:09.561388093 +0000 UTC m=+0.109853369 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:10:11 compute-0 nova_compute[189608]: 2025-11-24 22:10:11.795 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:10:11 compute-0 nova_compute[189608]: 2025-11-24 22:10:11.796 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 22:10:11 compute-0 nova_compute[189608]: 2025-11-24 22:10:11.797 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 22:10:12 compute-0 nova_compute[189608]: 2025-11-24 22:10:12.418 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "refresh_cache-ea741b45-c6b4-41c0-a70f-c752b616faa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:10:12 compute-0 nova_compute[189608]: 2025-11-24 22:10:12.419 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquired lock "refresh_cache-ea741b45-c6b4-41c0-a70f-c752b616faa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:10:12 compute-0 nova_compute[189608]: 2025-11-24 22:10:12.420 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 24 22:10:12 compute-0 nova_compute[189608]: 2025-11-24 22:10:12.420 189613 DEBUG nova.objects.instance [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lazy-loading 'info_cache' on Instance uuid ea741b45-c6b4-41c0-a70f-c752b616faa2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:10:13 compute-0 nova_compute[189608]: 2025-11-24 22:10:13.243 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:10:13 compute-0 nova_compute[189608]: 2025-11-24 22:10:13.949 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:10:14 compute-0 podman[242270]: 2025-11-24 22:10:14.771646218 +0000 UTC m=+0.086239856 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 24 22:10:14 compute-0 podman[242268]: 2025-11-24 22:10:14.773478184 +0000 UTC m=+0.098489346 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., config_id=edpm, release-0.7.12=, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=9.4, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9)
Nov 24 22:10:14 compute-0 podman[242269]: 2025-11-24 22:10:14.77592864 +0000 UTC m=+0.093471900 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=openstack_network_exporter, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, release=1755695350, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., distribution-scope=public)
Nov 24 22:10:15 compute-0 nova_compute[189608]: 2025-11-24 22:10:15.091 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Updating instance_info_cache with network_info: [{"id": "5430cfcb-550b-4518-9caa-0720f99730b9", "address": "fa:16:3e:85:21:ae", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5430cfcb-55", "ovs_interfaceid": "5430cfcb-550b-4518-9caa-0720f99730b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:10:15 compute-0 nova_compute[189608]: 2025-11-24 22:10:15.109 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Releasing lock "refresh_cache-ea741b45-c6b4-41c0-a70f-c752b616faa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:10:15 compute-0 nova_compute[189608]: 2025-11-24 22:10:15.110 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 24 22:10:15 compute-0 nova_compute[189608]: 2025-11-24 22:10:15.110 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:10:16 compute-0 nova_compute[189608]: 2025-11-24 22:10:16.804 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:10:16 compute-0 nova_compute[189608]: 2025-11-24 22:10:16.806 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:10:16 compute-0 nova_compute[189608]: 2025-11-24 22:10:16.807 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:10:16 compute-0 nova_compute[189608]: 2025-11-24 22:10:16.836 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:10:16 compute-0 nova_compute[189608]: 2025-11-24 22:10:16.837 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:10:16 compute-0 nova_compute[189608]: 2025-11-24 22:10:16.838 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:10:16 compute-0 nova_compute[189608]: 2025-11-24 22:10:16.839 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 22:10:16 compute-0 nova_compute[189608]: 2025-11-24 22:10:16.953 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:10:17 compute-0 nova_compute[189608]: 2025-11-24 22:10:17.018 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:10:17 compute-0 nova_compute[189608]: 2025-11-24 22:10:17.021 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:10:17 compute-0 nova_compute[189608]: 2025-11-24 22:10:17.093 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:10:17 compute-0 nova_compute[189608]: 2025-11-24 22:10:17.096 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:10:17 compute-0 nova_compute[189608]: 2025-11-24 22:10:17.180 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:10:17 compute-0 nova_compute[189608]: 2025-11-24 22:10:17.181 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:10:17 compute-0 nova_compute[189608]: 2025-11-24 22:10:17.268 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:10:17 compute-0 nova_compute[189608]: 2025-11-24 22:10:17.280 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:10:17 compute-0 nova_compute[189608]: 2025-11-24 22:10:17.352 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:10:17 compute-0 nova_compute[189608]: 2025-11-24 22:10:17.354 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:10:17 compute-0 nova_compute[189608]: 2025-11-24 22:10:17.419 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:10:17 compute-0 nova_compute[189608]: 2025-11-24 22:10:17.422 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:10:17 compute-0 nova_compute[189608]: 2025-11-24 22:10:17.512 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:10:17 compute-0 nova_compute[189608]: 2025-11-24 22:10:17.515 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:10:17 compute-0 nova_compute[189608]: 2025-11-24 22:10:17.591 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.620 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.621 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.621 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.622 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f4c57fbbfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.623 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.623 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.623 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.623 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.623 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.623 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.624 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.624 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.624 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.624 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.624 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.624 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.624 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.625 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.625 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.625 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.625 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.625 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.625 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.625 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.626 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.626 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.627 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.627 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.627 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56aa31a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.631 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'ea741b45-c6b4-41c0-a70f-c752b616faa2', 'name': 'test_0', 'flavor': {'id': 'cb3b4b8d-66d1-4524-ab48-d562ca8e5b4b', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a63b9561-12dc-4c11-858f-aa6fafbed036'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '309342b7e3e849b2a5dd56651d8fa068', 'user_id': '572aaac113f54af8a894707849aed6bf', 'hostId': '138597a5fd3bc726da772e57d320c412a04081cc4fa53fa9b77f5b6a', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.635 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '828f9a8f-602f-4ad5-a0b0-5a48a328d20e', 'name': 'vn-7lqlsa5-5i26uioiiugb-edy6vcflj352-vnf-mrrqmlwdy5wk', 'flavor': {'id': 'cb3b4b8d-66d1-4524-ab48-d562ca8e5b4b', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a63b9561-12dc-4c11-858f-aa6fafbed036'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '309342b7e3e849b2a5dd56651d8fa068', 'user_id': '572aaac113f54af8a894707849aed6bf', 'hostId': '138597a5fd3bc726da772e57d320c412a04081cc4fa53fa9b77f5b6a', 'status': 'active', 'metadata': {'metering.server_group': 'b438824c-ce52-4539-9db6-355e0ca018db'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.635 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.636 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.636 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.636 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.637 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-24T22:10:17.636273) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.641 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.646 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.647 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.647 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f4c580340b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.647 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.647 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.647 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.647 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.648 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.648 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.648 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.648 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-24T22:10:17.647708) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.648 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f4c57fba4e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.649 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.649 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.649 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.649 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.649 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-24T22:10:17.649386) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.690 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.691 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.691 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.722 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.722 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.722 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.722 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.723 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f4c57fbb080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.723 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.723 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.723 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.723 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.724 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-24T22:10:17.723475) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.799 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.799 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.800 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.870 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.871 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.871 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.872 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.872 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f4c57fbb1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.872 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.872 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.872 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.873 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.873 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.latency volume: 667223647 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.873 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-24T22:10:17.872920) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.873 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.latency volume: 118019506 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.874 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.latency volume: 84934831 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.874 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.latency volume: 1140139100 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.874 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.latency volume: 133972753 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.875 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.latency volume: 92855613 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.875 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.876 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f4c57fb9730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.876 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.876 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.876 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.876 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.877 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-24T22:10:17.876594) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.903 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/cpu volume: 37210000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.927 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/cpu volume: 182650000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.927 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.928 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f4c57fbb200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.928 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.928 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.928 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.928 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.928 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.928 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-24T22:10:17.928299) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.929 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.929 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.929 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.929 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.929 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.930 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.930 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f4c57fbb260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.930 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.930 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.930 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.930 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.931 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-24T22:10:17.930815) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.931 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.931 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.931 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.931 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.932 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.932 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.932 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.932 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f4c57fbb2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.932 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.932 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.932 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.933 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.933 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-24T22:10:17.933036) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.933 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.933 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.933 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.934 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.bytes volume: 41820160 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.934 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.934 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.934 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.934 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f4c57fbb320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.935 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.935 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.935 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.935 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.936 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.latency volume: 2026985911 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.936 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-24T22:10:17.935460) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.936 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.latency volume: 25037348 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.936 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.936 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.latency volume: 1386149484 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.937 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.latency volume: 12781157 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.937 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.938 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.938 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f4c57fbb380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.938 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.938 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.938 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.939 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.939 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.939 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-24T22:10:17.938932) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.939 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.939 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.939 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.requests volume: 240 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.940 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.940 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.940 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.940 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f4c58034380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.940 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.940 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.940 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.941 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.941 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-24T22:10:17.941019) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.941 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.941 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.941 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.942 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f4c57fbb740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.942 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.942 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.942 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.942 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.942 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-24T22:10:17.942376) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.942 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.943 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.943 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.943 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f4c57fbb3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.943 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.943 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.943 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.943 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.944 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-24T22:10:17.943774) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.944 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.944 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f4c57fbb9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.944 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.945 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f4c57fbb440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.945 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.945 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.945 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.945 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-24T22:10:17.945334) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.945 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.946 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.946 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f4c57fbbc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.946 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.946 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.946 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.947 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-24T22:10:17.946933) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.947 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.948 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.948 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.incoming.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.948 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.948 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f4c57fbbcb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.948 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.949 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.949 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.949 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.949 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.949 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-24T22:10:17.949147) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.950 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.950 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.950 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f4c57fbbd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.950 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.950 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.950 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.950 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.951 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-24T22:10:17.950746) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.951 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.951 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.951 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.951 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f4c57fbbda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.951 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.951 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.952 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.952 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.952 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-24T22:10:17.952109) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.952 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.952 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.outgoing.bytes volume: 4892 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.953 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.953 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f4c57fbbe30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.953 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.953 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.953 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.953 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.953 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-24T22:10:17.953534) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.953 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.954 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.954 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.954 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f4c57fbb680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.954 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.954 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.954 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.955 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.955 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-24T22:10:17.954909) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.955 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/memory.usage volume: 48.9453125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.955 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/memory.usage volume: 49.12109375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.955 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.956 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f4c57fbbec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.956 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.956 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f4c57fbb6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.957 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.957 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.957 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.957 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.957 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.957 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-24T22:10:17.957237) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.957 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.incoming.bytes volume: 4849 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.958 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.958 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f4c59b99130>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.958 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.958 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.958 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.958 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.959 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-24T22:10:17.958704) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.959 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.959 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.959 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.960 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.960 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.960 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.961 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.961 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f4c57fbbf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.961 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.961 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.961 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.961 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.961 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-24T22:10:17.961536) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.962 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.962 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.outgoing.packets volume: 43 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.962 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.963 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.963 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.963 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.963 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.963 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.963 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.963 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.963 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.963 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.963 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.963 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.963 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.963 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.964 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.964 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.964 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.964 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.964 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.964 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.964 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.964 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.964 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.964 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.964 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.964 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:10:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:10:17.964 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:10:18 compute-0 nova_compute[189608]: 2025-11-24 22:10:18.021 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:10:18 compute-0 nova_compute[189608]: 2025-11-24 22:10:18.023 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5054MB free_disk=72.18643569946289GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 22:10:18 compute-0 nova_compute[189608]: 2025-11-24 22:10:18.024 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:10:18 compute-0 nova_compute[189608]: 2025-11-24 22:10:18.025 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:10:18 compute-0 nova_compute[189608]: 2025-11-24 22:10:18.211 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance ea741b45-c6b4-41c0-a70f-c752b616faa2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:10:18 compute-0 nova_compute[189608]: 2025-11-24 22:10:18.212 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance 828f9a8f-602f-4ad5-a0b0-5a48a328d20e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:10:18 compute-0 nova_compute[189608]: 2025-11-24 22:10:18.212 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 22:10:18 compute-0 nova_compute[189608]: 2025-11-24 22:10:18.212 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 22:10:18 compute-0 nova_compute[189608]: 2025-11-24 22:10:18.244 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:10:18 compute-0 nova_compute[189608]: 2025-11-24 22:10:18.308 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Refreshing inventories for resource provider 7680d048-14f1-46f8-a34d-a7eb32eb11df _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 24 22:10:18 compute-0 nova_compute[189608]: 2025-11-24 22:10:18.389 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Updating ProviderTree inventory for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 24 22:10:18 compute-0 nova_compute[189608]: 2025-11-24 22:10:18.389 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Updating inventory in ProviderTree for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 24 22:10:18 compute-0 nova_compute[189608]: 2025-11-24 22:10:18.404 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Refreshing aggregate associations for resource provider 7680d048-14f1-46f8-a34d-a7eb32eb11df, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 24 22:10:18 compute-0 nova_compute[189608]: 2025-11-24 22:10:18.423 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Refreshing trait associations for resource provider 7680d048-14f1-46f8-a34d-a7eb32eb11df, traits: HW_CPU_X86_AVX2,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NODE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE4A,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_ACCELERATORS,HW_CPU_X86_SHA,HW_CPU_X86_AVX,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_1_2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 24 22:10:18 compute-0 nova_compute[189608]: 2025-11-24 22:10:18.502 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:10:18 compute-0 nova_compute[189608]: 2025-11-24 22:10:18.525 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:10:18 compute-0 nova_compute[189608]: 2025-11-24 22:10:18.528 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 22:10:18 compute-0 nova_compute[189608]: 2025-11-24 22:10:18.529 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.504s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:10:18 compute-0 nova_compute[189608]: 2025-11-24 22:10:18.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:10:18 compute-0 nova_compute[189608]: 2025-11-24 22:10:18.795 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:10:18 compute-0 nova_compute[189608]: 2025-11-24 22:10:18.798 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:10:18 compute-0 nova_compute[189608]: 2025-11-24 22:10:18.799 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:10:18 compute-0 nova_compute[189608]: 2025-11-24 22:10:18.800 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 24 22:10:18 compute-0 nova_compute[189608]: 2025-11-24 22:10:18.952 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:10:19 compute-0 podman[242346]: 2025-11-24 22:10:19.562013427 +0000 UTC m=+0.107561618 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 22:10:19 compute-0 nova_compute[189608]: 2025-11-24 22:10:19.811 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:10:19 compute-0 nova_compute[189608]: 2025-11-24 22:10:19.812 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 22:10:19 compute-0 nova_compute[189608]: 2025-11-24 22:10:19.812 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:10:19 compute-0 nova_compute[189608]: 2025-11-24 22:10:19.812 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 24 22:10:19 compute-0 nova_compute[189608]: 2025-11-24 22:10:19.836 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 24 22:10:20 compute-0 nova_compute[189608]: 2025-11-24 22:10:20.818 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:10:22 compute-0 podman[242372]: 2025-11-24 22:10:22.596286171 +0000 UTC m=+0.124280496 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 24 22:10:22 compute-0 podman[242371]: 2025-11-24 22:10:22.623032951 +0000 UTC m=+0.162267095 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Nov 24 22:10:23 compute-0 nova_compute[189608]: 2025-11-24 22:10:23.246 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:10:23 compute-0 nova_compute[189608]: 2025-11-24 22:10:23.957 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:10:28 compute-0 nova_compute[189608]: 2025-11-24 22:10:28.249 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:10:28 compute-0 nova_compute[189608]: 2025-11-24 22:10:28.960 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:10:29 compute-0 podman[203795]: time="2025-11-24T22:10:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:10:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:10:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 24 22:10:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:10:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4797 "" "Go-http-client/1.1"
Nov 24 22:10:30 compute-0 podman[242415]: 2025-11-24 22:10:30.559867675 +0000 UTC m=+0.092455329 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 24 22:10:31 compute-0 openstack_network_exporter[205945]: ERROR   22:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:10:31 compute-0 openstack_network_exporter[205945]: ERROR   22:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:10:31 compute-0 openstack_network_exporter[205945]: ERROR   22:10:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:10:31 compute-0 openstack_network_exporter[205945]: ERROR   22:10:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:10:31 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:10:31 compute-0 openstack_network_exporter[205945]: ERROR   22:10:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:10:31 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:10:33 compute-0 nova_compute[189608]: 2025-11-24 22:10:33.252 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:10:33 compute-0 nova_compute[189608]: 2025-11-24 22:10:33.964 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:10:37 compute-0 podman[242439]: 2025-11-24 22:10:37.58216391 +0000 UTC m=+0.132695748 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 22:10:38 compute-0 nova_compute[189608]: 2025-11-24 22:10:38.256 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:10:38 compute-0 nova_compute[189608]: 2025-11-24 22:10:38.967 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:10:40 compute-0 podman[242457]: 2025-11-24 22:10:40.53701742 +0000 UTC m=+0.094425770 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844)
Nov 24 22:10:43 compute-0 nova_compute[189608]: 2025-11-24 22:10:43.256 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:10:43 compute-0 nova_compute[189608]: 2025-11-24 22:10:43.970 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:10:45 compute-0 podman[242477]: 2025-11-24 22:10:45.533900018 +0000 UTC m=+0.086765313 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, vendor=Red Hat, Inc., version=9.4, managed_by=edpm_ansible, release=1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc.)
Nov 24 22:10:45 compute-0 podman[242478]: 2025-11-24 22:10:45.542600927 +0000 UTC m=+0.084713888 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, vcs-type=git, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.openshift.expose-services=, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, architecture=x86_64, container_name=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., release=1755695350)
Nov 24 22:10:45 compute-0 podman[242479]: 2025-11-24 22:10:45.567649154 +0000 UTC m=+0.092614503 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 24 22:10:48 compute-0 nova_compute[189608]: 2025-11-24 22:10:48.261 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:10:48 compute-0 nova_compute[189608]: 2025-11-24 22:10:48.974 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:10:49 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 24 22:10:49 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 24 22:10:49 compute-0 podman[242534]: 2025-11-24 22:10:49.787550837 +0000 UTC m=+0.143800772 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 24 22:10:53 compute-0 nova_compute[189608]: 2025-11-24 22:10:53.262 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:10:53 compute-0 podman[242577]: 2025-11-24 22:10:53.589432034 +0000 UTC m=+0.130649154 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 24 22:10:53 compute-0 podman[242576]: 2025-11-24 22:10:53.637775043 +0000 UTC m=+0.175736022 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller)
Nov 24 22:10:53 compute-0 nova_compute[189608]: 2025-11-24 22:10:53.979 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:10:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:10:54.566 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:10:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:10:54.567 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:10:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:10:54.568 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:10:58 compute-0 nova_compute[189608]: 2025-11-24 22:10:58.264 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:10:58 compute-0 nova_compute[189608]: 2025-11-24 22:10:58.984 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:10:59 compute-0 podman[203795]: time="2025-11-24T22:10:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:10:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:10:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 24 22:10:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:10:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4790 "" "Go-http-client/1.1"
Nov 24 22:11:01 compute-0 openstack_network_exporter[205945]: ERROR   22:11:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:11:01 compute-0 openstack_network_exporter[205945]: ERROR   22:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:11:01 compute-0 openstack_network_exporter[205945]: ERROR   22:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:11:01 compute-0 openstack_network_exporter[205945]: ERROR   22:11:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:11:01 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:11:01 compute-0 openstack_network_exporter[205945]: ERROR   22:11:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:11:01 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:11:01 compute-0 podman[242619]: 2025-11-24 22:11:01.59401806 +0000 UTC m=+0.134597707 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 24 22:11:03 compute-0 nova_compute[189608]: 2025-11-24 22:11:03.266 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:11:03 compute-0 nova_compute[189608]: 2025-11-24 22:11:03.986 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:11:08 compute-0 nova_compute[189608]: 2025-11-24 22:11:08.269 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:11:08 compute-0 podman[242643]: 2025-11-24 22:11:08.543886767 +0000 UTC m=+0.097381132 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 22:11:08 compute-0 nova_compute[189608]: 2025-11-24 22:11:08.989 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:11:11 compute-0 podman[242662]: 2025-11-24 22:11:11.558921685 +0000 UTC m=+0.116644359 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true)
Nov 24 22:11:11 compute-0 nova_compute[189608]: 2025-11-24 22:11:11.791 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:11:11 compute-0 sshd-session[242674]: Invalid user sol from 45.148.10.240 port 50164
Nov 24 22:11:11 compute-0 sshd-session[242674]: Connection closed by invalid user sol 45.148.10.240 port 50164 [preauth]
Nov 24 22:11:13 compute-0 nova_compute[189608]: 2025-11-24 22:11:13.273 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:11:13 compute-0 nova_compute[189608]: 2025-11-24 22:11:13.792 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:11:13 compute-0 nova_compute[189608]: 2025-11-24 22:11:13.793 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 22:11:13 compute-0 nova_compute[189608]: 2025-11-24 22:11:13.992 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:11:14 compute-0 nova_compute[189608]: 2025-11-24 22:11:14.020 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "refresh_cache-828f9a8f-602f-4ad5-a0b0-5a48a328d20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:11:14 compute-0 nova_compute[189608]: 2025-11-24 22:11:14.022 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquired lock "refresh_cache-828f9a8f-602f-4ad5-a0b0-5a48a328d20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:11:14 compute-0 nova_compute[189608]: 2025-11-24 22:11:14.023 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 24 22:11:14 compute-0 nova_compute[189608]: 2025-11-24 22:11:14.248 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:11:14 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:11:14.251 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7e:33:1a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '7e:8a:65:da:aa:a2'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:11:14 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:11:14.253 106776 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 22:11:15 compute-0 nova_compute[189608]: 2025-11-24 22:11:15.372 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Updating instance_info_cache with network_info: [{"id": "3223b8cb-74bd-4db9-8dd2-441f7c81c71c", "address": "fa:16:3e:3e:40:c3", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.166", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3223b8cb-74", "ovs_interfaceid": "3223b8cb-74bd-4db9-8dd2-441f7c81c71c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:11:15 compute-0 nova_compute[189608]: 2025-11-24 22:11:15.393 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Releasing lock "refresh_cache-828f9a8f-602f-4ad5-a0b0-5a48a328d20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:11:15 compute-0 nova_compute[189608]: 2025-11-24 22:11:15.395 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 24 22:11:16 compute-0 podman[242685]: 2025-11-24 22:11:16.550063843 +0000 UTC m=+0.090992684 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 22:11:16 compute-0 podman[242683]: 2025-11-24 22:11:16.573057575 +0000 UTC m=+0.122223182 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, release-0.7.12=, vendor=Red Hat, Inc., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, build-date=2024-09-18T21:23:30, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, vcs-type=git, maintainer=Red Hat, Inc., name=ubi9, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public)
Nov 24 22:11:16 compute-0 podman[242684]: 2025-11-24 22:11:16.581399014 +0000 UTC m=+0.118894129 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., version=9.6, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.buildah.version=1.33.7, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.openshift.expose-services=)
Nov 24 22:11:16 compute-0 nova_compute[189608]: 2025-11-24 22:11:16.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:11:16 compute-0 nova_compute[189608]: 2025-11-24 22:11:16.818 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:11:16 compute-0 nova_compute[189608]: 2025-11-24 22:11:16.818 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:11:16 compute-0 nova_compute[189608]: 2025-11-24 22:11:16.819 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:11:16 compute-0 nova_compute[189608]: 2025-11-24 22:11:16.829 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 22:11:16 compute-0 nova_compute[189608]: 2025-11-24 22:11:16.924 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:11:17 compute-0 nova_compute[189608]: 2025-11-24 22:11:17.023 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:11:17 compute-0 nova_compute[189608]: 2025-11-24 22:11:17.025 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:11:17 compute-0 nova_compute[189608]: 2025-11-24 22:11:17.090 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:11:17 compute-0 nova_compute[189608]: 2025-11-24 22:11:17.099 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:11:17 compute-0 nova_compute[189608]: 2025-11-24 22:11:17.198 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:11:17 compute-0 nova_compute[189608]: 2025-11-24 22:11:17.201 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:11:17 compute-0 nova_compute[189608]: 2025-11-24 22:11:17.311 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:11:17 compute-0 nova_compute[189608]: 2025-11-24 22:11:17.320 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:11:17 compute-0 nova_compute[189608]: 2025-11-24 22:11:17.401 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:11:17 compute-0 nova_compute[189608]: 2025-11-24 22:11:17.408 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:11:17 compute-0 nova_compute[189608]: 2025-11-24 22:11:17.467 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:11:17 compute-0 nova_compute[189608]: 2025-11-24 22:11:17.469 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:11:17 compute-0 nova_compute[189608]: 2025-11-24 22:11:17.538 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:11:17 compute-0 nova_compute[189608]: 2025-11-24 22:11:17.539 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:11:17 compute-0 nova_compute[189608]: 2025-11-24 22:11:17.628 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:11:18 compute-0 nova_compute[189608]: 2025-11-24 22:11:18.108 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:11:18 compute-0 nova_compute[189608]: 2025-11-24 22:11:18.110 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5043MB free_disk=72.18548202514648GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 22:11:18 compute-0 nova_compute[189608]: 2025-11-24 22:11:18.110 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:11:18 compute-0 nova_compute[189608]: 2025-11-24 22:11:18.111 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:11:18 compute-0 nova_compute[189608]: 2025-11-24 22:11:18.208 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance ea741b45-c6b4-41c0-a70f-c752b616faa2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:11:18 compute-0 nova_compute[189608]: 2025-11-24 22:11:18.210 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance 828f9a8f-602f-4ad5-a0b0-5a48a328d20e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:11:18 compute-0 nova_compute[189608]: 2025-11-24 22:11:18.211 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 22:11:18 compute-0 nova_compute[189608]: 2025-11-24 22:11:18.212 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 22:11:18 compute-0 nova_compute[189608]: 2025-11-24 22:11:18.277 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:11:18 compute-0 nova_compute[189608]: 2025-11-24 22:11:18.281 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:11:18 compute-0 nova_compute[189608]: 2025-11-24 22:11:18.295 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:11:18 compute-0 nova_compute[189608]: 2025-11-24 22:11:18.297 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 22:11:18 compute-0 nova_compute[189608]: 2025-11-24 22:11:18.297 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.186s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:11:18 compute-0 nova_compute[189608]: 2025-11-24 22:11:18.996 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:11:19 compute-0 nova_compute[189608]: 2025-11-24 22:11:19.298 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:11:19 compute-0 nova_compute[189608]: 2025-11-24 22:11:19.303 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:11:19 compute-0 nova_compute[189608]: 2025-11-24 22:11:19.304 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:11:19 compute-0 nova_compute[189608]: 2025-11-24 22:11:19.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:11:19 compute-0 nova_compute[189608]: 2025-11-24 22:11:19.796 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:11:20 compute-0 nova_compute[189608]: 2025-11-24 22:11:20.216 189613 DEBUG oslo_concurrency.lockutils [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "672e3ced-b18a-4ce7-aace-eb5c076ddb88" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:11:20 compute-0 nova_compute[189608]: 2025-11-24 22:11:20.217 189613 DEBUG oslo_concurrency.lockutils [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "672e3ced-b18a-4ce7-aace-eb5c076ddb88" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:11:20 compute-0 nova_compute[189608]: 2025-11-24 22:11:20.245 189613 DEBUG nova.compute.manager [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 24 22:11:20 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:11:20.256 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d2f80616-70e9-484c-836d-1edab81fe5d9, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:11:20 compute-0 nova_compute[189608]: 2025-11-24 22:11:20.336 189613 DEBUG oslo_concurrency.lockutils [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:11:20 compute-0 nova_compute[189608]: 2025-11-24 22:11:20.337 189613 DEBUG oslo_concurrency.lockutils [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:11:20 compute-0 nova_compute[189608]: 2025-11-24 22:11:20.347 189613 DEBUG nova.virt.hardware [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 24 22:11:20 compute-0 nova_compute[189608]: 2025-11-24 22:11:20.348 189613 INFO nova.compute.claims [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Claim successful on node compute-0.ctlplane.example.com
Nov 24 22:11:20 compute-0 nova_compute[189608]: 2025-11-24 22:11:20.512 189613 DEBUG nova.compute.provider_tree [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:11:20 compute-0 nova_compute[189608]: 2025-11-24 22:11:20.543 189613 DEBUG nova.scheduler.client.report [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:11:20 compute-0 podman[242763]: 2025-11-24 22:11:20.552629984 +0000 UTC m=+0.114625997 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 22:11:20 compute-0 nova_compute[189608]: 2025-11-24 22:11:20.573 189613 DEBUG oslo_concurrency.lockutils [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.237s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:11:20 compute-0 nova_compute[189608]: 2025-11-24 22:11:20.575 189613 DEBUG nova.compute.manager [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 24 22:11:20 compute-0 nova_compute[189608]: 2025-11-24 22:11:20.632 189613 DEBUG nova.compute.manager [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 24 22:11:20 compute-0 nova_compute[189608]: 2025-11-24 22:11:20.633 189613 DEBUG nova.network.neutron [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 24 22:11:20 compute-0 nova_compute[189608]: 2025-11-24 22:11:20.657 189613 INFO nova.virt.libvirt.driver [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 24 22:11:20 compute-0 nova_compute[189608]: 2025-11-24 22:11:20.704 189613 DEBUG nova.compute.manager [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 24 22:11:20 compute-0 nova_compute[189608]: 2025-11-24 22:11:20.783 189613 DEBUG nova.compute.manager [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 24 22:11:20 compute-0 nova_compute[189608]: 2025-11-24 22:11:20.799 189613 DEBUG nova.virt.libvirt.driver [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 24 22:11:20 compute-0 nova_compute[189608]: 2025-11-24 22:11:20.800 189613 INFO nova.virt.libvirt.driver [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Creating image(s)
Nov 24 22:11:20 compute-0 nova_compute[189608]: 2025-11-24 22:11:20.801 189613 DEBUG oslo_concurrency.lockutils [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "/var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:11:20 compute-0 nova_compute[189608]: 2025-11-24 22:11:20.801 189613 DEBUG oslo_concurrency.lockutils [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "/var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:11:20 compute-0 nova_compute[189608]: 2025-11-24 22:11:20.803 189613 DEBUG oslo_concurrency.lockutils [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "/var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:11:20 compute-0 nova_compute[189608]: 2025-11-24 22:11:20.824 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:11:20 compute-0 nova_compute[189608]: 2025-11-24 22:11:20.825 189613 DEBUG oslo_concurrency.processutils [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bc2e84058646d2c6ba728b20ebecd0301036e9ed --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:11:20 compute-0 nova_compute[189608]: 2025-11-24 22:11:20.899 189613 DEBUG oslo_concurrency.processutils [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bc2e84058646d2c6ba728b20ebecd0301036e9ed --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:11:20 compute-0 nova_compute[189608]: 2025-11-24 22:11:20.901 189613 DEBUG oslo_concurrency.lockutils [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "bc2e84058646d2c6ba728b20ebecd0301036e9ed" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:11:20 compute-0 nova_compute[189608]: 2025-11-24 22:11:20.903 189613 DEBUG oslo_concurrency.lockutils [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "bc2e84058646d2c6ba728b20ebecd0301036e9ed" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:11:20 compute-0 nova_compute[189608]: 2025-11-24 22:11:20.929 189613 DEBUG oslo_concurrency.processutils [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bc2e84058646d2c6ba728b20ebecd0301036e9ed --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:11:21 compute-0 nova_compute[189608]: 2025-11-24 22:11:21.005 189613 DEBUG oslo_concurrency.processutils [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bc2e84058646d2c6ba728b20ebecd0301036e9ed --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:11:21 compute-0 nova_compute[189608]: 2025-11-24 22:11:21.006 189613 DEBUG oslo_concurrency.processutils [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/bc2e84058646d2c6ba728b20ebecd0301036e9ed,backing_fmt=raw /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:11:21 compute-0 nova_compute[189608]: 2025-11-24 22:11:21.051 189613 DEBUG oslo_concurrency.processutils [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/bc2e84058646d2c6ba728b20ebecd0301036e9ed,backing_fmt=raw /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk 1073741824" returned: 0 in 0.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:11:21 compute-0 nova_compute[189608]: 2025-11-24 22:11:21.060 189613 DEBUG oslo_concurrency.lockutils [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "bc2e84058646d2c6ba728b20ebecd0301036e9ed" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.157s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:11:21 compute-0 nova_compute[189608]: 2025-11-24 22:11:21.061 189613 DEBUG oslo_concurrency.processutils [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bc2e84058646d2c6ba728b20ebecd0301036e9ed --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:11:21 compute-0 nova_compute[189608]: 2025-11-24 22:11:21.126 189613 DEBUG oslo_concurrency.processutils [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bc2e84058646d2c6ba728b20ebecd0301036e9ed --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:11:21 compute-0 nova_compute[189608]: 2025-11-24 22:11:21.127 189613 DEBUG nova.virt.disk.api [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Checking if we can resize image /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 24 22:11:21 compute-0 nova_compute[189608]: 2025-11-24 22:11:21.128 189613 DEBUG oslo_concurrency.processutils [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:11:21 compute-0 nova_compute[189608]: 2025-11-24 22:11:21.198 189613 DEBUG oslo_concurrency.processutils [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:11:21 compute-0 nova_compute[189608]: 2025-11-24 22:11:21.200 189613 DEBUG nova.virt.disk.api [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Cannot resize image /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Nov 24 22:11:21 compute-0 nova_compute[189608]: 2025-11-24 22:11:21.201 189613 DEBUG nova.objects.instance [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lazy-loading 'migration_context' on Instance uuid 672e3ced-b18a-4ce7-aace-eb5c076ddb88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:11:21 compute-0 nova_compute[189608]: 2025-11-24 22:11:21.214 189613 DEBUG oslo_concurrency.lockutils [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "/var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:11:21 compute-0 nova_compute[189608]: 2025-11-24 22:11:21.215 189613 DEBUG oslo_concurrency.lockutils [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "/var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:11:21 compute-0 nova_compute[189608]: 2025-11-24 22:11:21.216 189613 DEBUG oslo_concurrency.lockutils [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "/var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:11:21 compute-0 nova_compute[189608]: 2025-11-24 22:11:21.234 189613 DEBUG oslo_concurrency.processutils [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:11:21 compute-0 nova_compute[189608]: 2025-11-24 22:11:21.311 189613 DEBUG oslo_concurrency.processutils [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:11:21 compute-0 nova_compute[189608]: 2025-11-24 22:11:21.318 189613 DEBUG oslo_concurrency.lockutils [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:11:21 compute-0 nova_compute[189608]: 2025-11-24 22:11:21.319 189613 DEBUG oslo_concurrency.lockutils [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:11:21 compute-0 nova_compute[189608]: 2025-11-24 22:11:21.329 189613 DEBUG oslo_concurrency.processutils [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:11:21 compute-0 nova_compute[189608]: 2025-11-24 22:11:21.403 189613 DEBUG oslo_concurrency.processutils [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:11:21 compute-0 nova_compute[189608]: 2025-11-24 22:11:21.406 189613 DEBUG oslo_concurrency.processutils [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:11:21 compute-0 nova_compute[189608]: 2025-11-24 22:11:21.520 189613 DEBUG oslo_concurrency.processutils [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.eph0 1073741824" returned: 0 in 0.114s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:11:21 compute-0 nova_compute[189608]: 2025-11-24 22:11:21.522 189613 DEBUG oslo_concurrency.lockutils [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.203s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:11:21 compute-0 nova_compute[189608]: 2025-11-24 22:11:21.523 189613 DEBUG oslo_concurrency.processutils [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:11:21 compute-0 nova_compute[189608]: 2025-11-24 22:11:21.614 189613 DEBUG oslo_concurrency.processutils [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:11:21 compute-0 nova_compute[189608]: 2025-11-24 22:11:21.628 189613 DEBUG nova.virt.libvirt.driver [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 24 22:11:21 compute-0 nova_compute[189608]: 2025-11-24 22:11:21.628 189613 DEBUG nova.virt.libvirt.driver [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Ensure instance console log exists: /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 24 22:11:21 compute-0 nova_compute[189608]: 2025-11-24 22:11:21.630 189613 DEBUG oslo_concurrency.lockutils [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:11:21 compute-0 nova_compute[189608]: 2025-11-24 22:11:21.631 189613 DEBUG oslo_concurrency.lockutils [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:11:21 compute-0 nova_compute[189608]: 2025-11-24 22:11:21.632 189613 DEBUG oslo_concurrency.lockutils [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:11:21 compute-0 nova_compute[189608]: 2025-11-24 22:11:21.745 189613 DEBUG nova.network.neutron [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Successfully updated port: 2301b73c-6b2a-4a4b-afa2-7d6aa710652b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 24 22:11:21 compute-0 nova_compute[189608]: 2025-11-24 22:11:21.766 189613 DEBUG oslo_concurrency.lockutils [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "refresh_cache-672e3ced-b18a-4ce7-aace-eb5c076ddb88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:11:21 compute-0 nova_compute[189608]: 2025-11-24 22:11:21.766 189613 DEBUG oslo_concurrency.lockutils [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquired lock "refresh_cache-672e3ced-b18a-4ce7-aace-eb5c076ddb88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:11:21 compute-0 nova_compute[189608]: 2025-11-24 22:11:21.767 189613 DEBUG nova.network.neutron [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 22:11:21 compute-0 nova_compute[189608]: 2025-11-24 22:11:21.792 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:11:21 compute-0 nova_compute[189608]: 2025-11-24 22:11:21.793 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 22:11:21 compute-0 nova_compute[189608]: 2025-11-24 22:11:21.840 189613 DEBUG nova.compute.manager [req-c219ef63-901b-42ae-a967-731989ecfef4 req-c8f3e0e4-aed8-407b-956a-ddfd8202528b c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Received event network-changed-2301b73c-6b2a-4a4b-afa2-7d6aa710652b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:11:21 compute-0 nova_compute[189608]: 2025-11-24 22:11:21.841 189613 DEBUG nova.compute.manager [req-c219ef63-901b-42ae-a967-731989ecfef4 req-c8f3e0e4-aed8-407b-956a-ddfd8202528b c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Refreshing instance network info cache due to event network-changed-2301b73c-6b2a-4a4b-afa2-7d6aa710652b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 22:11:21 compute-0 nova_compute[189608]: 2025-11-24 22:11:21.841 189613 DEBUG oslo_concurrency.lockutils [req-c219ef63-901b-42ae-a967-731989ecfef4 req-c8f3e0e4-aed8-407b-956a-ddfd8202528b c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "refresh_cache-672e3ced-b18a-4ce7-aace-eb5c076ddb88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
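The network-changed event handled here is delivered by Neutron through Nova's os-server-external-events API. A rough sketch of the request body involved (endpoint, microversion and token handling omitted; this shows the API shape, not Neutron's notifier code):

import json

payload = {
    "events": [{
        "name": "network-changed",
        "tag": "2301b73c-6b2a-4a4b-afa2-7d6aa710652b",          # port UUID
        "server_uuid": "672e3ced-b18a-4ce7-aace-eb5c076ddb88",   # instance UUID
    }]
}
# POSTed by Neutron to the compute API at /v2.1/os-server-external-events
print(json.dumps(payload, indent=2))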
Nov 24 22:11:21 compute-0 nova_compute[189608]: 2025-11-24 22:11:21.910 189613 DEBUG nova.network.neutron [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.740 189613 DEBUG nova.network.neutron [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Updating instance_info_cache with network_info: [{"id": "2301b73c-6b2a-4a4b-afa2-7d6aa710652b", "address": "fa:16:3e:04:76:e6", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2301b73c-6b", "ovs_interfaceid": "2301b73c-6b2a-4a4b-afa2-7d6aa710652b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
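The network_info blob written to the instance info cache above carries the port's fixed and floating addresses. A short sketch that walks the same structure (only the fields used here are reproduced, with values taken from the log entry):

network_info = [{
    "id": "2301b73c-6b2a-4a4b-afa2-7d6aa710652b",
    "address": "fa:16:3e:04:76:e6",
    "network": {"subnets": [{
        "cidr": "192.168.0.0/24",
        "ips": [{"address": "192.168.0.182", "type": "fixed",
                 "floating_ips": [{"address": "192.168.122.209",
                                   "type": "floating"}]}],
    }]},
}]

for vif in network_info:
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            print(vif["id"], ip["address"],
                  [f["address"] for f in ip.get("floating_ips", [])])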
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.787 189613 DEBUG oslo_concurrency.lockutils [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Releasing lock "refresh_cache-672e3ced-b18a-4ce7-aace-eb5c076ddb88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.788 189613 DEBUG nova.compute.manager [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Instance network_info: |[{"id": "2301b73c-6b2a-4a4b-afa2-7d6aa710652b", "address": "fa:16:3e:04:76:e6", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2301b73c-6b", "ovs_interfaceid": "2301b73c-6b2a-4a4b-afa2-7d6aa710652b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.789 189613 DEBUG oslo_concurrency.lockutils [req-c219ef63-901b-42ae-a967-731989ecfef4 req-c8f3e0e4-aed8-407b-956a-ddfd8202528b c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquired lock "refresh_cache-672e3ced-b18a-4ce7-aace-eb5c076ddb88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.790 189613 DEBUG nova.network.neutron [req-c219ef63-901b-42ae-a967-731989ecfef4 req-c8f3e0e4-aed8-407b-956a-ddfd8202528b c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Refreshing network info cache for port 2301b73c-6b2a-4a4b-afa2-7d6aa710652b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.798 189613 DEBUG nova.virt.libvirt.driver [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Start _get_guest_xml network_info=[{"id": "2301b73c-6b2a-4a4b-afa2-7d6aa710652b", "address": "fa:16:3e:04:76:e6", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2301b73c-6b", "ovs_interfaceid": "2301b73c-6b2a-4a4b-afa2-7d6aa710652b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-24T22:03:25Z,direct_url=<?>,disk_format='qcow2',id=a63b9561-12dc-4c11-858f-aa6fafbed036,min_disk=0,min_ram=0,name='cirros',owner='309342b7e3e849b2a5dd56651d8fa068',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-24T22:03:26Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'size': 0, 'encryption_format': None, 'guest_format': None, 'image_id': 'a63b9561-12dc-4c11-858f-aa6fafbed036'}], 'ephemerals': [{'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vdb', 'device_type': 'disk', 'encryption_secret_uuid': None, 'size': 1, 'encryption_format': None, 'guest_format': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.809 189613 WARNING nova.virt.libvirt.driver [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.822 189613 DEBUG nova.virt.libvirt.host [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.823 189613 DEBUG nova.virt.libvirt.host [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.830 189613 DEBUG nova.virt.libvirt.host [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.831 189613 DEBUG nova.virt.libvirt.host [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
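The two probes above look for a usable cpu controller, first under cgroup v1 (missing on this host) and then under cgroup v2 (found). On a v2 host that check amounts to reading the root controllers file; a minimal sketch, not Nova's implementation:

from pathlib import Path

controllers = Path('/sys/fs/cgroup/cgroup.controllers')
has_cpu = controllers.exists() and 'cpu' in controllers.read_text().split()
print('CPU controller found on host.' if has_cpu
      else 'CPU controller missing on host.')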
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.833 189613 DEBUG nova.virt.libvirt.driver [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.833 189613 DEBUG nova.virt.hardware [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-24T22:03:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='cb3b4b8d-66d1-4524-ab48-d562ca8e5b4b',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-24T22:03:25Z,direct_url=<?>,disk_format='qcow2',id=a63b9561-12dc-4c11-858f-aa6fafbed036,min_disk=0,min_ram=0,name='cirros',owner='309342b7e3e849b2a5dd56651d8fa068',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-24T22:03:26Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.834 189613 DEBUG nova.virt.hardware [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.834 189613 DEBUG nova.virt.hardware [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.834 189613 DEBUG nova.virt.hardware [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.835 189613 DEBUG nova.virt.hardware [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.835 189613 DEBUG nova.virt.hardware [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.836 189613 DEBUG nova.virt.hardware [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.836 189613 DEBUG nova.virt.hardware [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.837 189613 DEBUG nova.virt.hardware [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.837 189613 DEBUG nova.virt.hardware [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.837 189613 DEBUG nova.virt.hardware [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
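With a 1-vCPU flavor and no topology limits or preferences (all reported as 0:0:0), the only possible layout is 1 socket x 1 core x 1 thread, which is what the log settles on. A simplified sketch of enumerating candidate topologies for a vCPU count (not Nova's actual algorithm, which also applies preferences and NUMA constraints):

def possible_topologies(vcpus, max_each=65536):
    """List (sockets, cores, threads) tuples whose product equals vcpus."""
    topos = []
    for sockets in range(1, min(vcpus, max_each) + 1):
        if vcpus % sockets:
            continue
        per_socket = vcpus // sockets
        for cores in range(1, min(per_socket, max_each) + 1):
            if per_socket % cores:
                continue
            threads = per_socket // cores
            if threads <= max_each:
                topos.append((sockets, cores, threads))
    return topos

print(possible_topologies(1))   # [(1, 1, 1)] -- matches the log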
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.843 189613 DEBUG nova.virt.libvirt.vif [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T22:11:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-7lqlsa5-oenbgqpbbpdb-jsa5qovsyfxv-vnf-d3kwbkhn7jvw',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-7lqlsa5-oenbgqpbbpdb-jsa5qovsyfxv-vnf-d3kwbkhn7jvw',id=3,image_ref='a63b9561-12dc-4c11-858f-aa6fafbed036',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='b438824c-ce52-4539-9db6-355e0ca018db'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='309342b7e3e849b2a5dd56651d8fa068',ramdisk_id='',reservation_id='r-t3n2k8ox',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='a63b9561-12dc-4c11-858f-aa6fafbed036',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T22:11:20Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04Mjk3Mzg2MTEwNzA2NDc0NzA1PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTgyOTczODYxMTA3MDY0NzQ3MDU9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODI5NzM4NjExMDcwNjQ3NDcwNT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBo
YXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTgyOTczODYxMTA3MDY0NzQ3MDU9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04Mjk3Mzg2MTEwNzA2NDc0NzA1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04Mjk3Mzg2MTEwNzA2NDc0NzA1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5
kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJnc
Nov 24 22:11:22 compute-0 nova_compute[189608]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODI5NzM4NjExMDcwNjQ3NDcwNT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTgyOTczODYxMTA3MDY0NzQ3MDU9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04Mjk3Mzg2MTEwNzA2NDc0NzA1PT0tLQo=',user_id='572aaac113f54af8a894707849aed6bf',uuid=672e3ced-b18a-4ce7-aace-eb5c076ddb88,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2301b73c-6b2a-4a4b-afa2-7d6aa710652b", "address": "fa:16:3e:04:76:e6", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2301b73c-6b", "ovs_interfaceid": "2301b73c-6b2a-4a4b-afa2-7d6aa710652b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.844 189613 DEBUG nova.network.os_vif_util [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Converting VIF {"id": "2301b73c-6b2a-4a4b-afa2-7d6aa710652b", "address": "fa:16:3e:04:76:e6", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2301b73c-6b", "ovs_interfaceid": "2301b73c-6b2a-4a4b-afa2-7d6aa710652b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.845 189613 DEBUG nova.network.os_vif_util [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:76:e6,bridge_name='br-int',has_traffic_filtering=True,id=2301b73c-6b2a-4a4b-afa2-7d6aa710652b,network=Network(1d1b3625-954d-4d8b-8b3f-323c25d9b42a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2301b73c-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.847 189613 DEBUG nova.objects.instance [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lazy-loading 'pci_devices' on Instance uuid 672e3ced-b18a-4ce7-aace-eb5c076ddb88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.865 189613 DEBUG nova.virt.libvirt.driver [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] End _get_guest_xml xml=<domain type="kvm">
Nov 24 22:11:22 compute-0 nova_compute[189608]:   <uuid>672e3ced-b18a-4ce7-aace-eb5c076ddb88</uuid>
Nov 24 22:11:22 compute-0 nova_compute[189608]:   <name>instance-00000003</name>
Nov 24 22:11:22 compute-0 nova_compute[189608]:   <memory>524288</memory>
Nov 24 22:11:22 compute-0 nova_compute[189608]:   <vcpu>1</vcpu>
Nov 24 22:11:22 compute-0 nova_compute[189608]:   <metadata>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 22:11:22 compute-0 nova_compute[189608]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:       <nova:name>vn-7lqlsa5-oenbgqpbbpdb-jsa5qovsyfxv-vnf-d3kwbkhn7jvw</nova:name>
Nov 24 22:11:22 compute-0 nova_compute[189608]:       <nova:creationTime>2025-11-24 22:11:22</nova:creationTime>
Nov 24 22:11:22 compute-0 nova_compute[189608]:       <nova:flavor name="m1.small">
Nov 24 22:11:22 compute-0 nova_compute[189608]:         <nova:memory>512</nova:memory>
Nov 24 22:11:22 compute-0 nova_compute[189608]:         <nova:disk>1</nova:disk>
Nov 24 22:11:22 compute-0 nova_compute[189608]:         <nova:swap>0</nova:swap>
Nov 24 22:11:22 compute-0 nova_compute[189608]:         <nova:ephemeral>1</nova:ephemeral>
Nov 24 22:11:22 compute-0 nova_compute[189608]:         <nova:vcpus>1</nova:vcpus>
Nov 24 22:11:22 compute-0 nova_compute[189608]:       </nova:flavor>
Nov 24 22:11:22 compute-0 nova_compute[189608]:       <nova:owner>
Nov 24 22:11:22 compute-0 nova_compute[189608]:         <nova:user uuid="572aaac113f54af8a894707849aed6bf">admin</nova:user>
Nov 24 22:11:22 compute-0 nova_compute[189608]:         <nova:project uuid="309342b7e3e849b2a5dd56651d8fa068">admin</nova:project>
Nov 24 22:11:22 compute-0 nova_compute[189608]:       </nova:owner>
Nov 24 22:11:22 compute-0 nova_compute[189608]:       <nova:root type="image" uuid="a63b9561-12dc-4c11-858f-aa6fafbed036"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:       <nova:ports>
Nov 24 22:11:22 compute-0 nova_compute[189608]:         <nova:port uuid="2301b73c-6b2a-4a4b-afa2-7d6aa710652b">
Nov 24 22:11:22 compute-0 nova_compute[189608]:           <nova:ip type="fixed" address="192.168.0.182" ipVersion="4"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:         </nova:port>
Nov 24 22:11:22 compute-0 nova_compute[189608]:       </nova:ports>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     </nova:instance>
Nov 24 22:11:22 compute-0 nova_compute[189608]:   </metadata>
Nov 24 22:11:22 compute-0 nova_compute[189608]:   <sysinfo type="smbios">
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <system>
Nov 24 22:11:22 compute-0 nova_compute[189608]:       <entry name="manufacturer">RDO</entry>
Nov 24 22:11:22 compute-0 nova_compute[189608]:       <entry name="product">OpenStack Compute</entry>
Nov 24 22:11:22 compute-0 nova_compute[189608]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 22:11:22 compute-0 nova_compute[189608]:       <entry name="serial">672e3ced-b18a-4ce7-aace-eb5c076ddb88</entry>
Nov 24 22:11:22 compute-0 nova_compute[189608]:       <entry name="uuid">672e3ced-b18a-4ce7-aace-eb5c076ddb88</entry>
Nov 24 22:11:22 compute-0 nova_compute[189608]:       <entry name="family">Virtual Machine</entry>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     </system>
Nov 24 22:11:22 compute-0 nova_compute[189608]:   </sysinfo>
Nov 24 22:11:22 compute-0 nova_compute[189608]:   <os>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <boot dev="hd"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <smbios mode="sysinfo"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:   </os>
Nov 24 22:11:22 compute-0 nova_compute[189608]:   <features>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <acpi/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <apic/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <vmcoreinfo/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:   </features>
Nov 24 22:11:22 compute-0 nova_compute[189608]:   <clock offset="utc">
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <timer name="pit" tickpolicy="delay"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <timer name="hpet" present="no"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:   </clock>
Nov 24 22:11:22 compute-0 nova_compute[189608]:   <cpu mode="host-model" match="exact">
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <topology sockets="1" cores="1" threads="1"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:   </cpu>
Nov 24 22:11:22 compute-0 nova_compute[189608]:   <devices>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <disk type="file" device="disk">
Nov 24 22:11:22 compute-0 nova_compute[189608]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:       <source file="/var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:       <target dev="vda" bus="virtio"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     </disk>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <disk type="file" device="disk">
Nov 24 22:11:22 compute-0 nova_compute[189608]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:       <source file="/var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.eph0"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:       <target dev="vdb" bus="virtio"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     </disk>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <disk type="file" device="cdrom">
Nov 24 22:11:22 compute-0 nova_compute[189608]:       <driver name="qemu" type="raw" cache="none"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:       <source file="/var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.config"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:       <target dev="sda" bus="sata"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     </disk>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <interface type="ethernet">
Nov 24 22:11:22 compute-0 nova_compute[189608]:       <mac address="fa:16:3e:04:76:e6"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:       <model type="virtio"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:       <driver name="vhost" rx_queue_size="512"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:       <mtu size="1442"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:       <target dev="tap2301b73c-6b"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     </interface>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <serial type="pty">
Nov 24 22:11:22 compute-0 nova_compute[189608]:       <log file="/var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/console.log" append="off"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     </serial>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <video>
Nov 24 22:11:22 compute-0 nova_compute[189608]:       <model type="virtio"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     </video>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <input type="tablet" bus="usb"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <rng model="virtio">
Nov 24 22:11:22 compute-0 nova_compute[189608]:       <backend model="random">/dev/urandom</backend>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     </rng>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <controller type="usb" index="0"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     <memballoon model="virtio">
Nov 24 22:11:22 compute-0 nova_compute[189608]:       <stats period="10"/>
Nov 24 22:11:22 compute-0 nova_compute[189608]:     </memballoon>
Nov 24 22:11:22 compute-0 nova_compute[189608]:   </devices>
Nov 24 22:11:22 compute-0 nova_compute[189608]: </domain>
Nov 24 22:11:22 compute-0 nova_compute[189608]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
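The domain XML above is what ends up being handed to libvirt to define and boot the guest. A minimal sketch with the libvirt Python bindings (Nova drives this through its own Host/Guest wrappers and adds error handling; the file path here is a placeholder for the XML text above):

import libvirt

xml = open('/tmp/instance-00000003.xml').read()   # placeholder path

conn = libvirt.open('qemu:///system')   # assumes access to the system socket
try:
    dom = conn.defineXML(xml)   # persist the domain definition
    dom.create()                # boot the instance
finally:
    conn.close()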
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.866 189613 DEBUG nova.compute.manager [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Preparing to wait for external event network-vif-plugged-2301b73c-6b2a-4a4b-afa2-7d6aa710652b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.867 189613 DEBUG oslo_concurrency.lockutils [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "672e3ced-b18a-4ce7-aace-eb5c076ddb88-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.867 189613 DEBUG oslo_concurrency.lockutils [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "672e3ced-b18a-4ce7-aace-eb5c076ddb88-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.867 189613 DEBUG oslo_concurrency.lockutils [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "672e3ced-b18a-4ce7-aace-eb5c076ddb88-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
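Before plugging the VIF, the manager registers an expectation for the network-vif-plugged event so it can later block until Neutron confirms the port is up. A rough illustration of that prepare-then-wait pattern with a plain threading.Event (Nova keeps this bookkeeping in its InstanceEvents class on top of eventlet; all names below are hypothetical):

import threading

pending = {}   # (instance_uuid, event_name) -> threading.Event

def prepare_for_instance_event(instance_uuid, event_name):
    return pending.setdefault((instance_uuid, event_name), threading.Event())

def deliver_event(instance_uuid, event_name):
    ev = pending.get((instance_uuid, event_name))
    if ev:
        ev.set()

# register first, then plug the interface, then wait with a timeout
ev = prepare_for_instance_event(
    '672e3ced-b18a-4ce7-aace-eb5c076ddb88',
    'network-vif-plugged-2301b73c-6b2a-4a4b-afa2-7d6aa710652b')
# ... VIF plugging and domain start happen here ...
ev.wait(timeout=300)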
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.868 189613 DEBUG nova.virt.libvirt.vif [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T22:11:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-7lqlsa5-oenbgqpbbpdb-jsa5qovsyfxv-vnf-d3kwbkhn7jvw',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-7lqlsa5-oenbgqpbbpdb-jsa5qovsyfxv-vnf-d3kwbkhn7jvw',id=3,image_ref='a63b9561-12dc-4c11-858f-aa6fafbed036',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='b438824c-ce52-4539-9db6-355e0ca018db'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='309342b7e3e849b2a5dd56651d8fa068',ramdisk_id='',reservation_id='r-t3n2k8ox',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='a63b9561-12dc-4c11-858f-aa6fafbed036',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T22:11:20Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04Mjk3Mzg2MTEwNzA2NDc0NzA1PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTgyOTczODYxMTA3MDY0NzQ3MDU9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODI5NzM4NjExMDcwNjQ3NDcwNT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm
50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTgyOTczODYxMTA3MDY0NzQ3MDU9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04Mjk3Mzg2MTEwNzA2NDc0NzA1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04Mjk3Mzg2MTEwNzA2NDc0NzA1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpY
nV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9
Nov 24 22:11:22 compute-0 nova_compute[189608]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODI5NzM4NjExMDcwNjQ3NDcwNT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTgyOTczODYxMTA3MDY0NzQ3MDU9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04Mjk3Mzg2MTEwNzA2NDc0NzA1PT0tLQo=',user_id='572aaac113f54af8a894707849aed6bf',uuid=672e3ced-b18a-4ce7-aace-eb5c076ddb88,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2301b73c-6b2a-4a4b-afa2-7d6aa710652b", "address": "fa:16:3e:04:76:e6", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2301b73c-6b", "ovs_interfaceid": "2301b73c-6b2a-4a4b-afa2-7d6aa710652b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.869 189613 DEBUG nova.network.os_vif_util [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Converting VIF {"id": "2301b73c-6b2a-4a4b-afa2-7d6aa710652b", "address": "fa:16:3e:04:76:e6", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2301b73c-6b", "ovs_interfaceid": "2301b73c-6b2a-4a4b-afa2-7d6aa710652b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.870 189613 DEBUG nova.network.os_vif_util [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:76:e6,bridge_name='br-int',has_traffic_filtering=True,id=2301b73c-6b2a-4a4b-afa2-7d6aa710652b,network=Network(1d1b3625-954d-4d8b-8b3f-323c25d9b42a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2301b73c-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.870 189613 DEBUG os_vif [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:76:e6,bridge_name='br-int',has_traffic_filtering=True,id=2301b73c-6b2a-4a4b-afa2-7d6aa710652b,network=Network(1d1b3625-954d-4d8b-8b3f-323c25d9b42a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2301b73c-6b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.871 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.871 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.872 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.876 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.876 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2301b73c-6b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.876 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2301b73c-6b, col_values=(('external_ids', {'iface-id': '2301b73c-6b2a-4a4b-afa2-7d6aa710652b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:04:76:e6', 'vm-uuid': '672e3ced-b18a-4ce7-aace-eb5c076ddb88'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.878 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:11:22 compute-0 NetworkManager[56413]: <info>  [1764022282.8806] manager: (tap2301b73c-6b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.881 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.893 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.894 189613 INFO os_vif [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:76:e6,bridge_name='br-int',has_traffic_filtering=True,id=2301b73c-6b2a-4a4b-afa2-7d6aa710652b,network=Network(1d1b3625-954d-4d8b-8b3f-323c25d9b42a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2301b73c-6b')
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.968 189613 DEBUG nova.virt.libvirt.driver [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.969 189613 DEBUG nova.virt.libvirt.driver [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.969 189613 DEBUG nova.virt.libvirt.driver [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.970 189613 DEBUG nova.virt.libvirt.driver [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] No VIF found with MAC fa:16:3e:04:76:e6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 24 22:11:22 compute-0 nova_compute[189608]: 2025-11-24 22:11:22.970 189613 INFO nova.virt.libvirt.driver [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Using config drive
Nov 24 22:11:23 compute-0 rsyslogd[237036]: message too long (8192) with configured size 8096, begin of message is: 2025-11-24 22:11:22.843 189613 DEBUG nova.virt.libvirt.vif [None req-f19c3d44-97 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 24 22:11:23 compute-0 rsyslogd[237036]: message too long (8192) with configured size 8096, begin of message is: 2025-11-24 22:11:22.868 189613 DEBUG nova.virt.libvirt.vif [None req-f19c3d44-97 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 24 22:11:23 compute-0 nova_compute[189608]: 2025-11-24 22:11:23.283 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:11:23 compute-0 nova_compute[189608]: 2025-11-24 22:11:23.758 189613 INFO nova.virt.libvirt.driver [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Creating config drive at /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.config
Nov 24 22:11:23 compute-0 nova_compute[189608]: 2025-11-24 22:11:23.765 189613 DEBUG oslo_concurrency.processutils [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4a_6q6m3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:11:23 compute-0 nova_compute[189608]: 2025-11-24 22:11:23.905 189613 DEBUG oslo_concurrency.processutils [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4a_6q6m3" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:11:24 compute-0 kernel: tap2301b73c-6b: entered promiscuous mode
Nov 24 22:11:24 compute-0 NetworkManager[56413]: <info>  [1764022284.0036] manager: (tap2301b73c-6b): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Nov 24 22:11:24 compute-0 nova_compute[189608]: 2025-11-24 22:11:24.007 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:11:24 compute-0 ovn_controller[97889]: 2025-11-24T22:11:24Z|00040|binding|INFO|Claiming lport 2301b73c-6b2a-4a4b-afa2-7d6aa710652b for this chassis.
Nov 24 22:11:24 compute-0 ovn_controller[97889]: 2025-11-24T22:11:24Z|00041|binding|INFO|2301b73c-6b2a-4a4b-afa2-7d6aa710652b: Claiming fa:16:3e:04:76:e6 192.168.0.182
Nov 24 22:11:24 compute-0 nova_compute[189608]: 2025-11-24 22:11:24.013 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:11:24 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:11:24.026 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:04:76:e6 192.168.0.182'], port_security=['fa:16:3e:04:76:e6 192.168.0.182'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-mikdi7lqlsa5-oenbgqpbbpdb-jsa5qovsyfxv-port-wwdx6emlbsja', 'neutron:cidrs': '192.168.0.182/24', 'neutron:device_id': '672e3ced-b18a-4ce7-aace-eb5c076ddb88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1d1b3625-954d-4d8b-8b3f-323c25d9b42a', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-mikdi7lqlsa5-oenbgqpbbpdb-jsa5qovsyfxv-port-wwdx6emlbsja', 'neutron:project_id': '309342b7e3e849b2a5dd56651d8fa068', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c15160ba-5e91-4c5e-8c90-4266712c07a0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.209'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b82cef4e-b720-4309-9703-51bb202fedba, chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], logical_port=2301b73c-6b2a-4a4b-afa2-7d6aa710652b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:11:24 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:11:24.027 106776 INFO neutron.agent.ovn.metadata.agent [-] Port 2301b73c-6b2a-4a4b-afa2-7d6aa710652b in datapath 1d1b3625-954d-4d8b-8b3f-323c25d9b42a bound to our chassis
Nov 24 22:11:24 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:11:24.028 106776 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1d1b3625-954d-4d8b-8b3f-323c25d9b42a
Nov 24 22:11:24 compute-0 nova_compute[189608]: 2025-11-24 22:11:24.033 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:11:24 compute-0 ovn_controller[97889]: 2025-11-24T22:11:24Z|00042|binding|INFO|Setting lport 2301b73c-6b2a-4a4b-afa2-7d6aa710652b ovn-installed in OVS
Nov 24 22:11:24 compute-0 ovn_controller[97889]: 2025-11-24T22:11:24Z|00043|binding|INFO|Setting lport 2301b73c-6b2a-4a4b-afa2-7d6aa710652b up in Southbound
Nov 24 22:11:24 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:11:24.046 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[e94d1812-1956-4db6-862b-3f7b6a2c4c9f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:11:24 compute-0 systemd-machined[155884]: New machine qemu-3-instance-00000003.
Nov 24 22:11:24 compute-0 systemd-udevd[242869]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 22:11:24 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Nov 24 22:11:24 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:11:24.087 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[d3e73d8c-264b-4f7a-867e-cb86d80dacd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:11:24 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:11:24.092 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[5e51e654-10c0-4f87-a811-c3c642237827]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:11:24 compute-0 podman[242826]: 2025-11-24 22:11:24.10304683 +0000 UTC m=+0.113297096 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Nov 24 22:11:24 compute-0 NetworkManager[56413]: <info>  [1764022284.1086] device (tap2301b73c-6b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 22:11:24 compute-0 NetworkManager[56413]: <info>  [1764022284.1096] device (tap2301b73c-6b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 24 22:11:24 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:11:24.124 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[3964a45a-84cd-4755-9c5f-ff94d89ffc16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:11:24 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:11:24.147 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[1116f682-52d3-4961-98d7-f3d27dceda07]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1d1b3625-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:e7:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 373788, 'reachable_time': 30662, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 242887, 'error': None, 'target': 'ovnmeta-1d1b3625-954d-4d8b-8b3f-323c25d9b42a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:11:24 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:11:24.164 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[77f32e8b-0b6e-4037-aa92-9653b4d34da0]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1d1b3625-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 373805, 'tstamp': 373805}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242891, 'error': None, 'target': 'ovnmeta-1d1b3625-954d-4d8b-8b3f-323c25d9b42a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap1d1b3625-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 373810, 'tstamp': 373810}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242891, 'error': None, 'target': 'ovnmeta-1d1b3625-954d-4d8b-8b3f-323c25d9b42a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:11:24 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:11:24.166 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d1b3625-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:11:24 compute-0 nova_compute[189608]: 2025-11-24 22:11:24.168 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:11:24 compute-0 nova_compute[189608]: 2025-11-24 22:11:24.169 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:11:24 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:11:24.170 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1d1b3625-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:11:24 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:11:24.171 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 22:11:24 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:11:24.171 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1d1b3625-90, col_values=(('external_ids', {'iface-id': '13073b7d-8165-42cd-87f4-fb1eb15a5b94'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:11:24 compute-0 podman[242825]: 2025-11-24 22:11:24.171805652 +0000 UTC m=+0.190727598 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 22:11:24 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:11:24.171 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 22:11:24 compute-0 nova_compute[189608]: 2025-11-24 22:11:24.268 189613 DEBUG nova.compute.manager [req-a996689f-6b14-4d1e-b0fd-dbafc31053ca req-53ed35f9-f129-4014-a53f-fcd4e50736c9 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Received event network-vif-plugged-2301b73c-6b2a-4a4b-afa2-7d6aa710652b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:11:24 compute-0 nova_compute[189608]: 2025-11-24 22:11:24.269 189613 DEBUG oslo_concurrency.lockutils [req-a996689f-6b14-4d1e-b0fd-dbafc31053ca req-53ed35f9-f129-4014-a53f-fcd4e50736c9 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "672e3ced-b18a-4ce7-aace-eb5c076ddb88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:11:24 compute-0 nova_compute[189608]: 2025-11-24 22:11:24.276 189613 DEBUG oslo_concurrency.lockutils [req-a996689f-6b14-4d1e-b0fd-dbafc31053ca req-53ed35f9-f129-4014-a53f-fcd4e50736c9 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "672e3ced-b18a-4ce7-aace-eb5c076ddb88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.008s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:11:24 compute-0 nova_compute[189608]: 2025-11-24 22:11:24.277 189613 DEBUG oslo_concurrency.lockutils [req-a996689f-6b14-4d1e-b0fd-dbafc31053ca req-53ed35f9-f129-4014-a53f-fcd4e50736c9 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "672e3ced-b18a-4ce7-aace-eb5c076ddb88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:11:24 compute-0 nova_compute[189608]: 2025-11-24 22:11:24.278 189613 DEBUG nova.compute.manager [req-a996689f-6b14-4d1e-b0fd-dbafc31053ca req-53ed35f9-f129-4014-a53f-fcd4e50736c9 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Processing event network-vif-plugged-2301b73c-6b2a-4a4b-afa2-7d6aa710652b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 24 22:11:24 compute-0 nova_compute[189608]: 2025-11-24 22:11:24.333 189613 DEBUG nova.network.neutron [req-c219ef63-901b-42ae-a967-731989ecfef4 req-c8f3e0e4-aed8-407b-956a-ddfd8202528b c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Updated VIF entry in instance network info cache for port 2301b73c-6b2a-4a4b-afa2-7d6aa710652b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 22:11:24 compute-0 nova_compute[189608]: 2025-11-24 22:11:24.334 189613 DEBUG nova.network.neutron [req-c219ef63-901b-42ae-a967-731989ecfef4 req-c8f3e0e4-aed8-407b-956a-ddfd8202528b c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Updating instance_info_cache with network_info: [{"id": "2301b73c-6b2a-4a4b-afa2-7d6aa710652b", "address": "fa:16:3e:04:76:e6", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2301b73c-6b", "ovs_interfaceid": "2301b73c-6b2a-4a4b-afa2-7d6aa710652b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:11:24 compute-0 nova_compute[189608]: 2025-11-24 22:11:24.354 189613 DEBUG oslo_concurrency.lockutils [req-c219ef63-901b-42ae-a967-731989ecfef4 req-c8f3e0e4-aed8-407b-956a-ddfd8202528b c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Releasing lock "refresh_cache-672e3ced-b18a-4ce7-aace-eb5c076ddb88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:11:24 compute-0 nova_compute[189608]: 2025-11-24 22:11:24.434 189613 DEBUG nova.compute.manager [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 24 22:11:24 compute-0 nova_compute[189608]: 2025-11-24 22:11:24.435 189613 DEBUG nova.virt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Emitting event <LifecycleEvent: 1764022284.4334774, 672e3ced-b18a-4ce7-aace-eb5c076ddb88 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:11:24 compute-0 nova_compute[189608]: 2025-11-24 22:11:24.435 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] VM Started (Lifecycle Event)
Nov 24 22:11:24 compute-0 nova_compute[189608]: 2025-11-24 22:11:24.460 189613 DEBUG nova.virt.libvirt.driver [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 24 22:11:24 compute-0 nova_compute[189608]: 2025-11-24 22:11:24.462 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:11:24 compute-0 nova_compute[189608]: 2025-11-24 22:11:24.481 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 22:11:24 compute-0 nova_compute[189608]: 2025-11-24 22:11:24.484 189613 INFO nova.virt.libvirt.driver [-] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Instance spawned successfully.
Nov 24 22:11:24 compute-0 nova_compute[189608]: 2025-11-24 22:11:24.485 189613 DEBUG nova.virt.libvirt.driver [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 24 22:11:24 compute-0 nova_compute[189608]: 2025-11-24 22:11:24.521 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 22:11:24 compute-0 nova_compute[189608]: 2025-11-24 22:11:24.521 189613 DEBUG nova.virt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Emitting event <LifecycleEvent: 1764022284.4336407, 672e3ced-b18a-4ce7-aace-eb5c076ddb88 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:11:24 compute-0 nova_compute[189608]: 2025-11-24 22:11:24.522 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] VM Paused (Lifecycle Event)
Nov 24 22:11:24 compute-0 nova_compute[189608]: 2025-11-24 22:11:24.535 189613 DEBUG nova.virt.libvirt.driver [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:11:24 compute-0 nova_compute[189608]: 2025-11-24 22:11:24.536 189613 DEBUG nova.virt.libvirt.driver [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:11:24 compute-0 nova_compute[189608]: 2025-11-24 22:11:24.537 189613 DEBUG nova.virt.libvirt.driver [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:11:24 compute-0 nova_compute[189608]: 2025-11-24 22:11:24.538 189613 DEBUG nova.virt.libvirt.driver [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:11:24 compute-0 nova_compute[189608]: 2025-11-24 22:11:24.539 189613 DEBUG nova.virt.libvirt.driver [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:11:24 compute-0 nova_compute[189608]: 2025-11-24 22:11:24.540 189613 DEBUG nova.virt.libvirt.driver [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:11:24 compute-0 nova_compute[189608]: 2025-11-24 22:11:24.569 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:11:24 compute-0 nova_compute[189608]: 2025-11-24 22:11:24.577 189613 DEBUG nova.virt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Emitting event <LifecycleEvent: 1764022284.4427733, 672e3ced-b18a-4ce7-aace-eb5c076ddb88 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:11:24 compute-0 nova_compute[189608]: 2025-11-24 22:11:24.578 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] VM Resumed (Lifecycle Event)
Nov 24 22:11:24 compute-0 nova_compute[189608]: 2025-11-24 22:11:24.603 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:11:24 compute-0 nova_compute[189608]: 2025-11-24 22:11:24.610 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 22:11:24 compute-0 nova_compute[189608]: 2025-11-24 22:11:24.616 189613 INFO nova.compute.manager [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Took 3.83 seconds to spawn the instance on the hypervisor.
Nov 24 22:11:24 compute-0 nova_compute[189608]: 2025-11-24 22:11:24.617 189613 DEBUG nova.compute.manager [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:11:24 compute-0 nova_compute[189608]: 2025-11-24 22:11:24.631 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 22:11:24 compute-0 nova_compute[189608]: 2025-11-24 22:11:24.697 189613 INFO nova.compute.manager [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Took 4.39 seconds to build instance.
Nov 24 22:11:24 compute-0 nova_compute[189608]: 2025-11-24 22:11:24.719 189613 DEBUG oslo_concurrency.lockutils [None req-f19c3d44-97fd-4ec5-8d78-e5ffa42db25a 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "672e3ced-b18a-4ce7-aace-eb5c076ddb88" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.502s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:11:26 compute-0 nova_compute[189608]: 2025-11-24 22:11:26.355 189613 DEBUG nova.compute.manager [req-a1d35a50-2ab3-4e50-9ded-da5ae5e671e1 req-f62b0625-f4d5-460b-b234-379123e72c28 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Received event network-vif-plugged-2301b73c-6b2a-4a4b-afa2-7d6aa710652b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:11:26 compute-0 nova_compute[189608]: 2025-11-24 22:11:26.357 189613 DEBUG oslo_concurrency.lockutils [req-a1d35a50-2ab3-4e50-9ded-da5ae5e671e1 req-f62b0625-f4d5-460b-b234-379123e72c28 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "672e3ced-b18a-4ce7-aace-eb5c076ddb88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:11:26 compute-0 nova_compute[189608]: 2025-11-24 22:11:26.358 189613 DEBUG oslo_concurrency.lockutils [req-a1d35a50-2ab3-4e50-9ded-da5ae5e671e1 req-f62b0625-f4d5-460b-b234-379123e72c28 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "672e3ced-b18a-4ce7-aace-eb5c076ddb88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:11:26 compute-0 nova_compute[189608]: 2025-11-24 22:11:26.359 189613 DEBUG oslo_concurrency.lockutils [req-a1d35a50-2ab3-4e50-9ded-da5ae5e671e1 req-f62b0625-f4d5-460b-b234-379123e72c28 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "672e3ced-b18a-4ce7-aace-eb5c076ddb88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:11:26 compute-0 nova_compute[189608]: 2025-11-24 22:11:26.360 189613 DEBUG nova.compute.manager [req-a1d35a50-2ab3-4e50-9ded-da5ae5e671e1 req-f62b0625-f4d5-460b-b234-379123e72c28 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] No waiting events found dispatching network-vif-plugged-2301b73c-6b2a-4a4b-afa2-7d6aa710652b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 22:11:26 compute-0 nova_compute[189608]: 2025-11-24 22:11:26.361 189613 WARNING nova.compute.manager [req-a1d35a50-2ab3-4e50-9ded-da5ae5e671e1 req-f62b0625-f4d5-460b-b234-379123e72c28 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Received unexpected event network-vif-plugged-2301b73c-6b2a-4a4b-afa2-7d6aa710652b for instance with vm_state active and task_state None.
Nov 24 22:11:27 compute-0 nova_compute[189608]: 2025-11-24 22:11:27.881 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:11:28 compute-0 nova_compute[189608]: 2025-11-24 22:11:28.286 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:11:29 compute-0 podman[203795]: time="2025-11-24T22:11:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:11:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:11:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 24 22:11:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:11:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4798 "" "Go-http-client/1.1"
Nov 24 22:11:31 compute-0 openstack_network_exporter[205945]: ERROR   22:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:11:31 compute-0 openstack_network_exporter[205945]: ERROR   22:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:11:31 compute-0 openstack_network_exporter[205945]: ERROR   22:11:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:11:31 compute-0 openstack_network_exporter[205945]: ERROR   22:11:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:11:31 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:11:31 compute-0 openstack_network_exporter[205945]: ERROR   22:11:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:11:31 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:11:32 compute-0 podman[242900]: 2025-11-24 22:11:32.521807283 +0000 UTC m=+0.078237159 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 24 22:11:32 compute-0 nova_compute[189608]: 2025-11-24 22:11:32.887 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:11:33 compute-0 nova_compute[189608]: 2025-11-24 22:11:33.290 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:11:37 compute-0 nova_compute[189608]: 2025-11-24 22:11:37.893 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:11:38 compute-0 nova_compute[189608]: 2025-11-24 22:11:38.293 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:11:39 compute-0 podman[242923]: 2025-11-24 22:11:39.553981924 +0000 UTC m=+0.109030303 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 22:11:42 compute-0 podman[242942]: 2025-11-24 22:11:42.550497927 +0000 UTC m=+0.107371321 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 24 22:11:42 compute-0 nova_compute[189608]: 2025-11-24 22:11:42.898 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:11:43 compute-0 nova_compute[189608]: 2025-11-24 22:11:43.298 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:11:47 compute-0 podman[242964]: 2025-11-24 22:11:47.584223975 +0000 UTC m=+0.115228835 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm)
Nov 24 22:11:47 compute-0 podman[242962]: 2025-11-24 22:11:47.584901266 +0000 UTC m=+0.137232488 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, managed_by=edpm_ansible, config_id=edpm, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, architecture=x86_64, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, maintainer=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, container_name=kepler)
Nov 24 22:11:47 compute-0 podman[242963]: 2025-11-24 22:11:47.584233585 +0000 UTC m=+0.120603392 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, build-date=2025-08-20T13:12:41, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers)
Nov 24 22:11:47 compute-0 nova_compute[189608]: 2025-11-24 22:11:47.914 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:11:48 compute-0 nova_compute[189608]: 2025-11-24 22:11:48.303 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:11:51 compute-0 podman[243017]: 2025-11-24 22:11:51.499610604 +0000 UTC m=+0.055976287 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 22:11:52 compute-0 nova_compute[189608]: 2025-11-24 22:11:52.916 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:11:53 compute-0 nova_compute[189608]: 2025-11-24 22:11:53.304 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:11:54 compute-0 ovn_controller[97889]: 2025-11-24T22:11:54Z|00044|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Nov 24 22:11:54 compute-0 podman[243042]: 2025-11-24 22:11:54.529287266 +0000 UTC m=+0.081564271 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:11:54 compute-0 podman[243041]: 2025-11-24 22:11:54.561758043 +0000 UTC m=+0.122070707 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:11:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:11:54.567 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:11:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:11:54.568 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:11:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:11:54.568 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:11:57 compute-0 nova_compute[189608]: 2025-11-24 22:11:57.921 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:11:58 compute-0 nova_compute[189608]: 2025-11-24 22:11:58.307 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:11:58 compute-0 ovn_controller[97889]: 2025-11-24T22:11:58Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:04:76:e6 192.168.0.182
Nov 24 22:11:58 compute-0 ovn_controller[97889]: 2025-11-24T22:11:58Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:04:76:e6 192.168.0.182
Nov 24 22:11:59 compute-0 podman[203795]: time="2025-11-24T22:11:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:11:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:11:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 24 22:11:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:11:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4798 "" "Go-http-client/1.1"
Nov 24 22:12:01 compute-0 openstack_network_exporter[205945]: ERROR   22:12:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:12:01 compute-0 openstack_network_exporter[205945]: ERROR   22:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:12:01 compute-0 openstack_network_exporter[205945]: ERROR   22:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:12:01 compute-0 openstack_network_exporter[205945]: ERROR   22:12:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:12:01 compute-0 openstack_network_exporter[205945]: ERROR   22:12:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:12:02 compute-0 nova_compute[189608]: 2025-11-24 22:12:02.926 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:12:03 compute-0 nova_compute[189608]: 2025-11-24 22:12:03.311 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:12:03 compute-0 podman[243105]: 2025-11-24 22:12:03.533919476 +0000 UTC m=+0.097976320 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 24 22:12:07 compute-0 nova_compute[189608]: 2025-11-24 22:12:07.930 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:12:08 compute-0 nova_compute[189608]: 2025-11-24 22:12:08.317 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:12:09 compute-0 sshd-session[243130]: Invalid user solv from 193.32.162.145 port 37072
Nov 24 22:12:09 compute-0 podman[243132]: 2025-11-24 22:12:09.935552012 +0000 UTC m=+0.091563490 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Nov 24 22:12:09 compute-0 sshd-session[243130]: Connection closed by invalid user solv 193.32.162.145 port 37072 [preauth]
Nov 24 22:12:12 compute-0 nova_compute[189608]: 2025-11-24 22:12:12.934 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:12:13 compute-0 nova_compute[189608]: 2025-11-24 22:12:13.321 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:12:13 compute-0 podman[243153]: 2025-11-24 22:12:13.571017194 +0000 UTC m=+0.116157155 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844)
Nov 24 22:12:13 compute-0 nova_compute[189608]: 2025-11-24 22:12:13.795 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:12:13 compute-0 nova_compute[189608]: 2025-11-24 22:12:13.797 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 22:12:13 compute-0 nova_compute[189608]: 2025-11-24 22:12:13.797 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 22:12:14 compute-0 nova_compute[189608]: 2025-11-24 22:12:14.030 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "refresh_cache-ea741b45-c6b4-41c0-a70f-c752b616faa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:12:14 compute-0 nova_compute[189608]: 2025-11-24 22:12:14.032 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquired lock "refresh_cache-ea741b45-c6b4-41c0-a70f-c752b616faa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:12:14 compute-0 nova_compute[189608]: 2025-11-24 22:12:14.033 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 24 22:12:14 compute-0 nova_compute[189608]: 2025-11-24 22:12:14.034 189613 DEBUG nova.objects.instance [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lazy-loading 'info_cache' on Instance uuid ea741b45-c6b4-41c0-a70f-c752b616faa2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:12:16 compute-0 nova_compute[189608]: 2025-11-24 22:12:16.366 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Updating instance_info_cache with network_info: [{"id": "5430cfcb-550b-4518-9caa-0720f99730b9", "address": "fa:16:3e:85:21:ae", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5430cfcb-55", "ovs_interfaceid": "5430cfcb-550b-4518-9caa-0720f99730b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:12:16 compute-0 nova_compute[189608]: 2025-11-24 22:12:16.381 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Releasing lock "refresh_cache-ea741b45-c6b4-41c0-a70f-c752b616faa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:12:16 compute-0 nova_compute[189608]: 2025-11-24 22:12:16.381 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 24 22:12:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:17.621 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 24 22:12:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:17.622 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 24 22:12:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:17.622 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:12:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:17.622 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f4c57fbbfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:12:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:17.623 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:12:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:17.623 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:12:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:17.623 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:12:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:17.624 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:12:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:17.624 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:12:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:17.624 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:12:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:17.624 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:12:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:17.624 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:12:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:17.624 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:12:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:17.624 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:12:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:17.624 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:12:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:17.624 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:12:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:17.624 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:12:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:17.624 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:12:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:17.625 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:12:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:17.625 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:12:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:17.625 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:12:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:17.625 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:12:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:17.625 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:12:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:17.625 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:12:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:17.625 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:12:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:17.625 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:12:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:17.625 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:12:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:17.625 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:12:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:17.625 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:12:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:17.629 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 672e3ced-b18a-4ce7-aace-eb5c076ddb88 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 24 22:12:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:17.631 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/672e3ced-b18a-4ce7-aace-eb5c076ddb88 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}64060fdce0de69c84bc4ec1fbe9dbdfbc4dd57ffec34862b3445d9841d3967a2" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 24 22:12:17 compute-0 nova_compute[189608]: 2025-11-24 22:12:17.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:12:17 compute-0 nova_compute[189608]: 2025-11-24 22:12:17.819 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:12:17 compute-0 nova_compute[189608]: 2025-11-24 22:12:17.820 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:12:17 compute-0 nova_compute[189608]: 2025-11-24 22:12:17.821 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:12:17 compute-0 nova_compute[189608]: 2025-11-24 22:12:17.821 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 22:12:17 compute-0 nova_compute[189608]: 2025-11-24 22:12:17.936 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:12:17 compute-0 nova_compute[189608]: 2025-11-24 22:12:17.942 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:12:17 compute-0 podman[243175]: 2025-11-24 22:12:17.99500679 +0000 UTC m=+0.100695814 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_id=edpm, managed_by=edpm_ansible, release=1755695350, architecture=x86_64, build-date=2025-08-20T13:12:41, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 24 22:12:18 compute-0 podman[243174]: 2025-11-24 22:12:18.000427509 +0000 UTC m=+0.108863598 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, container_name=kepler, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, release=1214.1726694543)
Nov 24 22:12:18 compute-0 nova_compute[189608]: 2025-11-24 22:12:18.011 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:12:18 compute-0 nova_compute[189608]: 2025-11-24 22:12:18.012 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:12:18 compute-0 podman[243176]: 2025-11-24 22:12:18.024258809 +0000 UTC m=+0.121115139 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_ipmi)
Nov 24 22:12:18 compute-0 nova_compute[189608]: 2025-11-24 22:12:18.070 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:12:18 compute-0 nova_compute[189608]: 2025-11-24 22:12:18.071 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:12:18 compute-0 nova_compute[189608]: 2025-11-24 22:12:18.134 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:12:18 compute-0 nova_compute[189608]: 2025-11-24 22:12:18.136 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:12:18 compute-0 nova_compute[189608]: 2025-11-24 22:12:18.211 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.eph0 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:12:18 compute-0 nova_compute[189608]: 2025-11-24 22:12:18.225 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:12:18 compute-0 nova_compute[189608]: 2025-11-24 22:12:18.298 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:12:18 compute-0 nova_compute[189608]: 2025-11-24 22:12:18.299 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:12:18 compute-0 nova_compute[189608]: 2025-11-24 22:12:18.320 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.327 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Mon, 24 Nov 2025 22:12:17 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-77573aaa-80c9-4565-bb60-5f8f2489684d x-openstack-request-id: req-77573aaa-80c9-4565-bb60-5f8f2489684d _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.328 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "672e3ced-b18a-4ce7-aace-eb5c076ddb88", "name": "vn-7lqlsa5-oenbgqpbbpdb-jsa5qovsyfxv-vnf-d3kwbkhn7jvw", "status": "ACTIVE", "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "user_id": "572aaac113f54af8a894707849aed6bf", "metadata": {"metering.server_group": "b438824c-ce52-4539-9db6-355e0ca018db"}, "hostId": "138597a5fd3bc726da772e57d320c412a04081cc4fa53fa9b77f5b6a", "image": {"id": "a63b9561-12dc-4c11-858f-aa6fafbed036", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/a63b9561-12dc-4c11-858f-aa6fafbed036"}]}, "flavor": {"id": "cb3b4b8d-66d1-4524-ab48-d562ca8e5b4b", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/cb3b4b8d-66d1-4524-ab48-d562ca8e5b4b"}]}, "created": "2025-11-24T22:11:18Z", "updated": "2025-11-24T22:11:24Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.182", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:04:76:e6"}, {"version": 4, "addr": "192.168.122.209", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:04:76:e6"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/672e3ced-b18a-4ce7-aace-eb5c076ddb88"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/672e3ced-b18a-4ce7-aace-eb5c076ddb88"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-11-24T22:11:24.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.328 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/672e3ced-b18a-4ce7-aace-eb5c076ddb88 used request id req-77573aaa-80c9-4565-bb60-5f8f2489684d request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.329 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '672e3ced-b18a-4ce7-aace-eb5c076ddb88', 'name': 'vn-7lqlsa5-oenbgqpbbpdb-jsa5qovsyfxv-vnf-d3kwbkhn7jvw', 'flavor': {'id': 'cb3b4b8d-66d1-4524-ab48-d562ca8e5b4b', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a63b9561-12dc-4c11-858f-aa6fafbed036'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '309342b7e3e849b2a5dd56651d8fa068', 'user_id': '572aaac113f54af8a894707849aed6bf', 'hostId': '138597a5fd3bc726da772e57d320c412a04081cc4fa53fa9b77f5b6a', 'status': 'active', 'metadata': {'metering.server_group': 'b438824c-ce52-4539-9db6-355e0ca018db'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.332 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'ea741b45-c6b4-41c0-a70f-c752b616faa2', 'name': 'test_0', 'flavor': {'id': 'cb3b4b8d-66d1-4524-ab48-d562ca8e5b4b', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a63b9561-12dc-4c11-858f-aa6fafbed036'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '309342b7e3e849b2a5dd56651d8fa068', 'user_id': '572aaac113f54af8a894707849aed6bf', 'hostId': '138597a5fd3bc726da772e57d320c412a04081cc4fa53fa9b77f5b6a', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.334 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '828f9a8f-602f-4ad5-a0b0-5a48a328d20e', 'name': 'vn-7lqlsa5-5i26uioiiugb-edy6vcflj352-vnf-mrrqmlwdy5wk', 'flavor': {'id': 'cb3b4b8d-66d1-4524-ab48-d562ca8e5b4b', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a63b9561-12dc-4c11-858f-aa6fafbed036'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '309342b7e3e849b2a5dd56651d8fa068', 'user_id': '572aaac113f54af8a894707849aed6bf', 'hostId': '138597a5fd3bc726da772e57d320c412a04081cc4fa53fa9b77f5b6a', 'status': 'active', 'metadata': {'metering.server_group': 'b438824c-ce52-4539-9db6-355e0ca018db'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.334 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.334 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.334 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.335 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.336 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-24T22:12:18.335037) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.338 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 672e3ced-b18a-4ce7-aace-eb5c076ddb88 / tap2301b73c-6b inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.338 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.342 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.345 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.346 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f4c580340b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.346 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.346 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.346 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.346 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.346 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.346 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.347 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.347 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.347 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f4c57fba4e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.347 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.347 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.347 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.348 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.349 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-24T22:12:18.346636) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.349 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-24T22:12:18.348048) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.374 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.375 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.375 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 nova_compute[189608]: 2025-11-24 22:12:18.383 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:12:18 compute-0 nova_compute[189608]: 2025-11-24 22:12:18.385 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.396 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.397 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.397 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.418 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.419 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.419 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.420 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.421 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f4c57fbb080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.421 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.421 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.421 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.421 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.422 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-24T22:12:18.421718) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:12:18 compute-0 nova_compute[189608]: 2025-11-24 22:12:18.485 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:12:18 compute-0 nova_compute[189608]: 2025-11-24 22:12:18.487 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.493 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.494 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.495 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 nova_compute[189608]: 2025-11-24 22:12:18.573 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.592 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.593 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.594 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 nova_compute[189608]: 2025-11-24 22:12:18.594 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:12:18 compute-0 nova_compute[189608]: 2025-11-24 22:12:18.670 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:12:18 compute-0 nova_compute[189608]: 2025-11-24 22:12:18.672 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.687 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.687 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.688 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.689 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.689 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f4c57fbb1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.689 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.690 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.690 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.690 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.690 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.read.latency volume: 822487867 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.691 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.read.latency volume: 92574229 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.691 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-24T22:12:18.690318) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.692 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.read.latency volume: 84915884 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.692 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.latency volume: 667223647 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.693 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.latency volume: 118019506 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.693 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.latency volume: 84934831 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.694 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.latency volume: 1140139100 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.694 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.latency volume: 133972753 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.695 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.latency volume: 92855613 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.696 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.696 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f4c57fb9730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.696 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.696 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.696 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.697 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.697 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-24T22:12:18.696929) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.730 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/cpu volume: 33880000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 nova_compute[189608]: 2025-11-24 22:12:18.753 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:12:18 compute-0 nova_compute[189608]: 2025-11-24 22:12:18.755 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.759 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/cpu volume: 38910000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.785 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/cpu volume: 303160000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.786 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.786 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f4c57fbb200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.786 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.786 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.786 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.787 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.787 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.787 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.789 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-24T22:12:18.787044) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.788 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.789 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.790 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.790 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.791 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.791 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.791 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.792 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.793 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f4c57fbb260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.793 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.793 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.793 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.793 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.794 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.794 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.794 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.795 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.795 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.796 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.796 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.797 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.797 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.798 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.799 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-24T22:12:18.793735) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.799 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f4c57fbb2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.799 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.799 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.800 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.800 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.800 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-24T22:12:18.800252) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.800 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.write.bytes volume: 41742336 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.801 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.802 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.802 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.802 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.803 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.803 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.bytes volume: 41820160 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.803 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.804 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.805 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.805 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f4c57fbb320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.805 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.805 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.806 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.806 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.806 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.write.latency volume: 2762378939 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.806 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.write.latency volume: 9784234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.807 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.807 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.latency volume: 2026985911 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.808 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.latency volume: 25037348 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.808 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.809 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.latency volume: 1386149484 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.809 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.latency volume: 12781157 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.810 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.810 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.811 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-24T22:12:18.806217) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.812 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f4c57fbb380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.812 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.812 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.812 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.812 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.812 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.write.requests volume: 225 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.813 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.813 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-24T22:12:18.812668) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.814 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.814 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.815 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.815 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.816 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.requests volume: 240 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.816 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.816 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.817 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.818 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f4c58034380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.818 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.818 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.818 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.818 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.819 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.819 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-24T22:12:18.818765) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.820 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.820 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.821 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
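
Annotation: the power.state samples above report volume 1 for all three instances, which matches libvirt's VIR_DOMAIN_RUNNING state code. A sketch of reading that state directly from libvirt; whether ceilometer remaps the code before publishing is not shown in this log, so the sketch only illustrates the source of the value.

    # Sketch: libvirt domain state behind a power.state sample (volume 1 above
    # corresponds to VIR_DOMAIN_RUNNING == 1). Assumes libvirt-python.
    import libvirt

    STATE_NAMES = {
        libvirt.VIR_DOMAIN_NOSTATE: "nostate",
        libvirt.VIR_DOMAIN_RUNNING: "running",
        libvirt.VIR_DOMAIN_BLOCKED: "blocked",
        libvirt.VIR_DOMAIN_PAUSED: "paused",
        libvirt.VIR_DOMAIN_SHUTDOWN: "shutdown",
        libvirt.VIR_DOMAIN_SHUTOFF: "shutoff",
        libvirt.VIR_DOMAIN_CRASHED: "crashed",
        libvirt.VIR_DOMAIN_PMSUSPENDED: "pmsuspended",
    }

    conn = libvirt.openReadOnly("qemu:///system")
    for dom in conn.listAllDomains():
        state, reason = dom.state()           # returns [state, reason]
        print(dom.UUIDString(), state, STATE_NAMES.get(state, "unknown"))
    conn.close()
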
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.821 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f4c57fbb740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.821 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.821 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.822 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.822 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.822 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.822 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.823 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.824 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
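
Annotation: network.incoming.bytes.delta is derived from the cumulative interface counter, so the agent must remember the previous reading per resource and report the difference. A minimal sketch of that bookkeeping with a simple in-memory cache; the cache layout and the sample numbers are assumptions for illustration, not ceilometer's internals.

    # Sketch: turn a cumulative byte counter into a per-interval delta sample.
    _previous: dict[str, int] = {}

    def delta_sample(resource_id: str, cumulative_bytes: int) -> int:
        """Bytes received since the last poll for this resource."""
        last = _previous.get(resource_id)
        _previous[resource_id] = cumulative_bytes
        if last is None or cumulative_bytes < last:   # first poll or counter reset
            return 0
        return cumulative_bytes - last

    print(delta_sample("ea741b45", 1968))   # first poll  -> 0
    print(delta_sample("ea741b45", 2052))   # second poll -> 84, like the log above
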
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.824 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f4c57fbb3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.824 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.825 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-24T22:12:18.822182) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.825 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.825 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.826 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.827 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.827 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-24T22:12:18.825957) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.827 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f4c57fbb9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.828 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.828 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.828 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.828 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.828 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.828 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-7lqlsa5-oenbgqpbbpdb-jsa5qovsyfxv-vnf-d3kwbkhn7jvw>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-7lqlsa5-oenbgqpbbpdb-jsa5qovsyfxv-vnf-d3kwbkhn7jvw>]
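
Annotation: the ERROR above is not a crash. LibvirtInspector has no data for the *.rate meters, so the pollster signals ceilometer.polling.plugin_base.PollsterPermanentError and the manager stops asking it about those resources on this source. A stand-in sketch of that blacklisting pattern with a local exception class; attribute names here are illustrative, not the real class's.

    # Sketch of the "permanent error" pattern seen above. PermanentError stands
    # in for ceilometer.polling.plugin_base.PollsterPermanentError.
    class PermanentError(Exception):
        def __init__(self, resources):
            super().__init__(resources)
            self.resources = resources

    def get_samples(resources):
        # A rate pollster the inspector cannot serve gives up permanently.
        raise PermanentError(resources)

    blacklist = set()
    resources = ["vn-7lqlsa5-oenbgqpbbpdb-jsa5qovsyfxv-vnf-d3kwbkhn7jvw"]
    try:
        get_samples([r for r in resources if r not in blacklist])
    except PermanentError as exc:
        # The manager records the failing resources and skips them next cycle.
        blacklist.update(exc.resources)
        print("permanently skipping:", sorted(blacklist))
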
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.828 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f4c57fbb440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.829 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.829 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.829 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.829 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.829 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.830 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f4c57fbbc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.830 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.830 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.830 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.830 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.830 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.830 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.incoming.packets volume: 19 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.831 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.831 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.831 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f4c57fbbcb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:12:18 compute-0 nova_compute[189608]: 2025-11-24 22:12:18.831 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.832 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.832 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.832 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.832 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.832 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 nova_compute[189608]: 2025-11-24 22:12:18.832 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
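
Annotation: nova runs qemu-img info under oslo_concurrency's prlimit helper (the --as=1073741824 and --cpu=30 flags above cap the child's address space at 1 GiB and its CPU time at 30 s) and then parses the JSON output. A direct sketch of the underlying command and how its JSON is consumed; the prlimit wrapper is omitted here, and the disk path is taken from the log line above.

    # Sketch: the qemu-img query logged above, without nova's prlimit wrapper.
    import json
    import subprocess

    disk = "/var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0"
    out = subprocess.run(
        ["qemu-img", "info", disk, "--force-share", "--output=json"],
        check=True, capture_output=True, text=True,
        env={"LC_ALL": "C", "LANG": "C"},
    ).stdout

    info = json.loads(out)
    print(info["format"], info["virtual-size"], info.get("actual-size"))
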
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.832 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.833 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.833 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.834 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f4c57fbbd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.834 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.834 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.834 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.834 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.834 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.834 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.835 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.835 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.836 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-24T22:12:18.828253) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.836 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-24T22:12:18.829250) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.836 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-24T22:12:18.830582) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.836 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-24T22:12:18.832492) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.836 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-24T22:12:18.834487) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.835 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f4c57fbbda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.836 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.837 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.837 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.837 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.837 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/network.outgoing.bytes volume: 1906 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.837 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.838 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.outgoing.bytes volume: 4962 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.838 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.839 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-24T22:12:18.837241) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.839 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f4c57fbbe30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.839 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.839 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.839 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.839 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.840 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.840 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.840 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-24T22:12:18.839920) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.841 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.841 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.841 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f4c57fbb680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.841 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.841 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.841 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.842 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.842 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/memory.usage volume: 49.66015625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.842 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/memory.usage volume: 48.9453125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.842 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/memory.usage volume: 49.12109375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.843 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-24T22:12:18.842013) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.843 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
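
Annotation: the memory.usage volumes above (roughly 49 MB for 512 MB guests) originate in the balloon statistics libvirt exposes per domain. A sketch of reading those counters; the exact expression ceilometer publishes is not shown in this log, so treating "available minus unused" as used guest memory is an assumption made for illustration.

    # Sketch: guest memory statistics from libvirt (counters are in KiB).
    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("672e3ced-b18a-4ce7-aace-eb5c076ddb88")
    stats = dom.memoryStats()                        # dict of KiB counters
    if "available" in stats and "unused" in stats:
        used_mb = (stats["available"] - stats["unused"]) / 1024.0
    else:
        used_mb = stats.get("rss", 0) / 1024.0       # fall back to host RSS
    print(round(used_mb, 2), "MB")
    conn.close()
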
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.843 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f4c57fbbec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.844 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.844 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.844 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.844 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.844 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.844 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-7lqlsa5-oenbgqpbbpdb-jsa5qovsyfxv-vnf-d3kwbkhn7jvw>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-7lqlsa5-oenbgqpbbpdb-jsa5qovsyfxv-vnf-d3kwbkhn7jvw>]
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.844 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f4c57fbb6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.845 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.845 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.845 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.845 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.845 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.845 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.incoming.bytes volume: 2052 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.846 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.incoming.bytes volume: 4933 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.846 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.846 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f4c59b99130>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.846 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.846 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.847 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.847 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.847 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.847 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.847 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.848 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.848 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.848 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.848 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.849 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.849 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.850 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
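
Annotation: disk.device.allocation reports, per device, how many bytes of the backing image are actually allocated, which is why each instance above contributes three samples, one per attached disk. A sketch using libvirt's blockInfo call, which returns capacity, allocation and physical size in bytes; the device name "vda" is an assumption, real device names come from the domain XML.

    # Sketch: allocation figure for one device, like the samples above.
    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("828f9a8f-602f-4ad5-a0b0-5a48a328d20e")
    # blockInfo(dev, flags) returns [capacity, allocation, physical] in bytes.
    capacity, allocation, physical = dom.blockInfo("vda", 0)
    print("vda allocation:", allocation, "bytes of", capacity)
    conn.close()
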
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.850 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-24T22:12:18.844275) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.850 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-24T22:12:18.845331) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.851 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-24T22:12:18.847128) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.851 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f4c57fbbf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.851 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.851 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.851 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.851 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.851 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.852 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.852 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.outgoing.packets volume: 44 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.852 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-24T22:12:18.851735) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.853 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.853 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.854 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.854 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.854 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.854 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.854 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.854 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.854 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.854 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.854 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.854 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.854 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.854 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.854 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:12:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:12:18.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:12:18 compute-0 nova_compute[189608]: 2025-11-24 22:12:18.899 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:12:19 compute-0 nova_compute[189608]: 2025-11-24 22:12:19.255 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:12:19 compute-0 nova_compute[189608]: 2025-11-24 22:12:19.256 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4826MB free_disk=72.16330337524414GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 22:12:19 compute-0 nova_compute[189608]: 2025-11-24 22:12:19.257 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:12:19 compute-0 nova_compute[189608]: 2025-11-24 22:12:19.257 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:12:19 compute-0 nova_compute[189608]: 2025-11-24 22:12:19.324 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance ea741b45-c6b4-41c0-a70f-c752b616faa2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:12:19 compute-0 nova_compute[189608]: 2025-11-24 22:12:19.324 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance 828f9a8f-602f-4ad5-a0b0-5a48a328d20e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:12:19 compute-0 nova_compute[189608]: 2025-11-24 22:12:19.325 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance 672e3ced-b18a-4ce7-aace-eb5c076ddb88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:12:19 compute-0 nova_compute[189608]: 2025-11-24 22:12:19.325 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 22:12:19 compute-0 nova_compute[189608]: 2025-11-24 22:12:19.325 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 22:12:19 compute-0 nova_compute[189608]: 2025-11-24 22:12:19.388 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:12:19 compute-0 nova_compute[189608]: 2025-11-24 22:12:19.404 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:12:19 compute-0 nova_compute[189608]: 2025-11-24 22:12:19.426 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 22:12:19 compute-0 nova_compute[189608]: 2025-11-24 22:12:19.426 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.169s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:12:21 compute-0 nova_compute[189608]: 2025-11-24 22:12:21.421 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:12:21 compute-0 nova_compute[189608]: 2025-11-24 22:12:21.422 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:12:21 compute-0 nova_compute[189608]: 2025-11-24 22:12:21.423 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:12:21 compute-0 nova_compute[189608]: 2025-11-24 22:12:21.423 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:12:21 compute-0 nova_compute[189608]: 2025-11-24 22:12:21.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:12:22 compute-0 podman[243268]: 2025-11-24 22:12:22.559111354 +0000 UTC m=+0.115320478 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 24 22:12:22 compute-0 nova_compute[189608]: 2025-11-24 22:12:22.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:12:22 compute-0 nova_compute[189608]: 2025-11-24 22:12:22.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:12:22 compute-0 nova_compute[189608]: 2025-11-24 22:12:22.793 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 22:12:22 compute-0 nova_compute[189608]: 2025-11-24 22:12:22.939 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:12:23 compute-0 nova_compute[189608]: 2025-11-24 22:12:23.325 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:12:25 compute-0 podman[243293]: 2025-11-24 22:12:25.595742995 +0000 UTC m=+0.131692599 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 24 22:12:25 compute-0 podman[243292]: 2025-11-24 22:12:25.614629032 +0000 UTC m=+0.157169681 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 24 22:12:27 compute-0 nova_compute[189608]: 2025-11-24 22:12:27.948 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:12:28 compute-0 nova_compute[189608]: 2025-11-24 22:12:28.329 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:12:29 compute-0 podman[203795]: time="2025-11-24T22:12:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:12:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:12:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 24 22:12:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:12:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4806 "" "Go-http-client/1.1"
Nov 24 22:12:31 compute-0 openstack_network_exporter[205945]: ERROR   22:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:12:31 compute-0 openstack_network_exporter[205945]: ERROR   22:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:12:31 compute-0 openstack_network_exporter[205945]: ERROR   22:12:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:12:31 compute-0 openstack_network_exporter[205945]: ERROR   22:12:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:12:31 compute-0 openstack_network_exporter[205945]: ERROR   22:12:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:12:32 compute-0 nova_compute[189608]: 2025-11-24 22:12:32.956 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:12:33 compute-0 nova_compute[189608]: 2025-11-24 22:12:33.331 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:12:34 compute-0 podman[243333]: 2025-11-24 22:12:34.537819693 +0000 UTC m=+0.084073257 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 24 22:12:37 compute-0 nova_compute[189608]: 2025-11-24 22:12:37.962 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:12:38 compute-0 nova_compute[189608]: 2025-11-24 22:12:38.336 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:12:40 compute-0 podman[243356]: 2025-11-24 22:12:40.543009045 +0000 UTC m=+0.094209073 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 24 22:12:42 compute-0 nova_compute[189608]: 2025-11-24 22:12:42.967 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:12:43 compute-0 nova_compute[189608]: 2025-11-24 22:12:43.337 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:12:44 compute-0 podman[243376]: 2025-11-24 22:12:44.561179143 +0000 UTC m=+0.113119510 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Nov 24 22:12:47 compute-0 nova_compute[189608]: 2025-11-24 22:12:47.970 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:12:48 compute-0 nova_compute[189608]: 2025-11-24 22:12:48.341 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:12:48 compute-0 podman[243395]: 2025-11-24 22:12:48.556955716 +0000 UTC m=+0.112773360 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-container, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, build-date=2024-09-18T21:23:30, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.buildah.version=1.29.0)
Nov 24 22:12:48 compute-0 podman[243396]: 2025-11-24 22:12:48.580127397 +0000 UTC m=+0.113955127 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, version=9.6, distribution-scope=public, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, name=ubi9-minimal, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Nov 24 22:12:48 compute-0 podman[243397]: 2025-11-24 22:12:48.586091263 +0000 UTC m=+0.127397286 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=edpm)
Nov 24 22:12:52 compute-0 nova_compute[189608]: 2025-11-24 22:12:52.973 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:12:53 compute-0 nova_compute[189608]: 2025-11-24 22:12:53.344 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:12:53 compute-0 podman[243456]: 2025-11-24 22:12:53.567687326 +0000 UTC m=+0.109772516 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 24 22:12:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:12:54.568 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:12:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:12:54.570 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:12:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:12:54.571 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:12:56 compute-0 podman[243480]: 2025-11-24 22:12:56.547906042 +0000 UTC m=+0.106116753 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 24 22:12:56 compute-0 podman[243479]: 2025-11-24 22:12:56.605118111 +0000 UTC m=+0.161121313 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true)
Nov 24 22:12:57 compute-0 nova_compute[189608]: 2025-11-24 22:12:57.977 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:12:58 compute-0 nova_compute[189608]: 2025-11-24 22:12:58.345 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:12:59 compute-0 podman[203795]: time="2025-11-24T22:12:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:12:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:12:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 24 22:12:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:12:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4802 "" "Go-http-client/1.1"
Nov 24 22:13:01 compute-0 openstack_network_exporter[205945]: ERROR   22:13:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:13:01 compute-0 openstack_network_exporter[205945]: ERROR   22:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:13:01 compute-0 openstack_network_exporter[205945]: ERROR   22:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:13:01 compute-0 openstack_network_exporter[205945]: ERROR   22:13:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:13:01 compute-0 openstack_network_exporter[205945]: ERROR   22:13:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:13:02 compute-0 nova_compute[189608]: 2025-11-24 22:13:02.981 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:13:03 compute-0 nova_compute[189608]: 2025-11-24 22:13:03.348 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:13:05 compute-0 podman[243520]: 2025-11-24 22:13:05.564815598 +0000 UTC m=+0.106089011 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 24 22:13:07 compute-0 nova_compute[189608]: 2025-11-24 22:13:07.985 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:13:08 compute-0 nova_compute[189608]: 2025-11-24 22:13:08.353 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:13:11 compute-0 sshd-session[243544]: Invalid user sol from 45.148.10.240 port 59526
Nov 24 22:13:11 compute-0 podman[243546]: 2025-11-24 22:13:11.568638187 +0000 UTC m=+0.109027503 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 22:13:11 compute-0 sshd-session[243544]: Connection closed by invalid user sol 45.148.10.240 port 59526 [preauth]
Nov 24 22:13:11 compute-0 nova_compute[189608]: 2025-11-24 22:13:11.790 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:13:12 compute-0 nova_compute[189608]: 2025-11-24 22:13:12.990 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:13:13 compute-0 nova_compute[189608]: 2025-11-24 22:13:13.359 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:13:13 compute-0 nova_compute[189608]: 2025-11-24 22:13:13.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:13:13 compute-0 nova_compute[189608]: 2025-11-24 22:13:13.794 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 22:13:14 compute-0 nova_compute[189608]: 2025-11-24 22:13:14.275 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "refresh_cache-828f9a8f-602f-4ad5-a0b0-5a48a328d20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:13:14 compute-0 nova_compute[189608]: 2025-11-24 22:13:14.276 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquired lock "refresh_cache-828f9a8f-602f-4ad5-a0b0-5a48a328d20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:13:14 compute-0 nova_compute[189608]: 2025-11-24 22:13:14.276 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 24 22:13:14 compute-0 podman[243565]: 2025-11-24 22:13:14.834294523 +0000 UTC m=+0.128682384 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:13:15 compute-0 nova_compute[189608]: 2025-11-24 22:13:15.809 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Updating instance_info_cache with network_info: [{"id": "3223b8cb-74bd-4db9-8dd2-441f7c81c71c", "address": "fa:16:3e:3e:40:c3", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.166", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3223b8cb-74", "ovs_interfaceid": "3223b8cb-74bd-4db9-8dd2-441f7c81c71c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:13:15 compute-0 nova_compute[189608]: 2025-11-24 22:13:15.824 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Releasing lock "refresh_cache-828f9a8f-602f-4ad5-a0b0-5a48a328d20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:13:15 compute-0 nova_compute[189608]: 2025-11-24 22:13:15.825 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 24 22:13:17 compute-0 nova_compute[189608]: 2025-11-24 22:13:17.995 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:13:18 compute-0 nova_compute[189608]: 2025-11-24 22:13:18.363 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:13:18 compute-0 nova_compute[189608]: 2025-11-24 22:13:18.539 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:13:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:13:18.537 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7e:33:1a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '7e:8a:65:da:aa:a2'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:13:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:13:18.543 106776 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 22:13:19 compute-0 podman[243586]: 2025-11-24 22:13:19.536326529 +0000 UTC m=+0.092277142 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, managed_by=edpm_ansible, name=ubi9-minimal, container_name=openstack_network_exporter, io.buildah.version=1.33.7, version=9.6, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, config_id=edpm, com.redhat.component=ubi9-minimal-container)
Nov 24 22:13:19 compute-0 podman[243585]: 2025-11-24 22:13:19.542831112 +0000 UTC m=+0.102546161 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, release=1214.1726694543, distribution-scope=public, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., io.openshift.expose-services=, name=ubi9, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, com.redhat.component=ubi9-container)
Nov 24 22:13:19 compute-0 podman[243587]: 2025-11-24 22:13:19.564154425 +0000 UTC m=+0.112507062 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 24 22:13:19 compute-0 nova_compute[189608]: 2025-11-24 22:13:19.792 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:13:19 compute-0 nova_compute[189608]: 2025-11-24 22:13:19.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:13:19 compute-0 nova_compute[189608]: 2025-11-24 22:13:19.821 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:13:19 compute-0 nova_compute[189608]: 2025-11-24 22:13:19.822 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:13:19 compute-0 nova_compute[189608]: 2025-11-24 22:13:19.822 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:13:19 compute-0 nova_compute[189608]: 2025-11-24 22:13:19.823 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 22:13:19 compute-0 nova_compute[189608]: 2025-11-24 22:13:19.957 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:13:20 compute-0 nova_compute[189608]: 2025-11-24 22:13:20.030 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:13:20 compute-0 nova_compute[189608]: 2025-11-24 22:13:20.032 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:13:20 compute-0 nova_compute[189608]: 2025-11-24 22:13:20.093 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:13:20 compute-0 nova_compute[189608]: 2025-11-24 22:13:20.095 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:13:20 compute-0 nova_compute[189608]: 2025-11-24 22:13:20.158 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:13:20 compute-0 nova_compute[189608]: 2025-11-24 22:13:20.168 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:13:20 compute-0 nova_compute[189608]: 2025-11-24 22:13:20.235 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:13:20 compute-0 nova_compute[189608]: 2025-11-24 22:13:20.245 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:13:20 compute-0 nova_compute[189608]: 2025-11-24 22:13:20.308 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:13:20 compute-0 nova_compute[189608]: 2025-11-24 22:13:20.309 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:13:20 compute-0 nova_compute[189608]: 2025-11-24 22:13:20.369 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:13:20 compute-0 nova_compute[189608]: 2025-11-24 22:13:20.371 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:13:20 compute-0 nova_compute[189608]: 2025-11-24 22:13:20.441 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:13:20 compute-0 nova_compute[189608]: 2025-11-24 22:13:20.443 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:13:20 compute-0 nova_compute[189608]: 2025-11-24 22:13:20.507 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:13:20 compute-0 nova_compute[189608]: 2025-11-24 22:13:20.515 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:13:20 compute-0 nova_compute[189608]: 2025-11-24 22:13:20.583 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:13:20 compute-0 nova_compute[189608]: 2025-11-24 22:13:20.585 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:13:20 compute-0 nova_compute[189608]: 2025-11-24 22:13:20.656 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:13:20 compute-0 nova_compute[189608]: 2025-11-24 22:13:20.659 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:13:20 compute-0 nova_compute[189608]: 2025-11-24 22:13:20.728 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:13:20 compute-0 nova_compute[189608]: 2025-11-24 22:13:20.730 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:13:20 compute-0 nova_compute[189608]: 2025-11-24 22:13:20.798 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:13:21 compute-0 nova_compute[189608]: 2025-11-24 22:13:21.346 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:13:21 compute-0 nova_compute[189608]: 2025-11-24 22:13:21.349 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4782MB free_disk=72.16330337524414GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 22:13:21 compute-0 nova_compute[189608]: 2025-11-24 22:13:21.349 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:13:21 compute-0 nova_compute[189608]: 2025-11-24 22:13:21.350 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:13:21 compute-0 nova_compute[189608]: 2025-11-24 22:13:21.454 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance ea741b45-c6b4-41c0-a70f-c752b616faa2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:13:21 compute-0 nova_compute[189608]: 2025-11-24 22:13:21.455 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance 828f9a8f-602f-4ad5-a0b0-5a48a328d20e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:13:21 compute-0 nova_compute[189608]: 2025-11-24 22:13:21.455 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance 672e3ced-b18a-4ce7-aace-eb5c076ddb88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:13:21 compute-0 nova_compute[189608]: 2025-11-24 22:13:21.456 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 22:13:21 compute-0 nova_compute[189608]: 2025-11-24 22:13:21.457 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 22:13:21 compute-0 nova_compute[189608]: 2025-11-24 22:13:21.572 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:13:21 compute-0 nova_compute[189608]: 2025-11-24 22:13:21.592 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:13:21 compute-0 nova_compute[189608]: 2025-11-24 22:13:21.595 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 22:13:21 compute-0 nova_compute[189608]: 2025-11-24 22:13:21.596 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.246s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:13:23 compute-0 nova_compute[189608]: 2025-11-24 22:13:23.002 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:13:23 compute-0 nova_compute[189608]: 2025-11-24 22:13:23.365 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:13:23 compute-0 nova_compute[189608]: 2025-11-24 22:13:23.595 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:13:23 compute-0 nova_compute[189608]: 2025-11-24 22:13:23.597 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:13:23 compute-0 nova_compute[189608]: 2025-11-24 22:13:23.598 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:13:23 compute-0 nova_compute[189608]: 2025-11-24 22:13:23.599 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:13:24 compute-0 podman[243681]: 2025-11-24 22:13:24.522207617 +0000 UTC m=+0.077884664 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 24 22:13:24 compute-0 nova_compute[189608]: 2025-11-24 22:13:24.792 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:13:24 compute-0 nova_compute[189608]: 2025-11-24 22:13:24.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:13:24 compute-0 nova_compute[189608]: 2025-11-24 22:13:24.793 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 22:13:25 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 24 22:13:26 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:13:26.546 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d2f80616-70e9-484c-836d-1edab81fe5d9, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:13:27 compute-0 podman[243708]: 2025-11-24 22:13:27.589558413 +0000 UTC m=+0.123351779 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 24 22:13:27 compute-0 podman[243707]: 2025-11-24 22:13:27.589607215 +0000 UTC m=+0.142479965 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 24 22:13:27 compute-0 nova_compute[189608]: 2025-11-24 22:13:27.628 189613 DEBUG oslo_concurrency.lockutils [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "7e7d375c-a42c-41c5-934f-c46941a40067" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:13:27 compute-0 nova_compute[189608]: 2025-11-24 22:13:27.629 189613 DEBUG oslo_concurrency.lockutils [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "7e7d375c-a42c-41c5-934f-c46941a40067" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:13:27 compute-0 nova_compute[189608]: 2025-11-24 22:13:27.647 189613 DEBUG nova.compute.manager [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 24 22:13:27 compute-0 nova_compute[189608]: 2025-11-24 22:13:27.716 189613 DEBUG oslo_concurrency.lockutils [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:13:27 compute-0 nova_compute[189608]: 2025-11-24 22:13:27.717 189613 DEBUG oslo_concurrency.lockutils [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:13:27 compute-0 nova_compute[189608]: 2025-11-24 22:13:27.725 189613 DEBUG nova.virt.hardware [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 24 22:13:27 compute-0 nova_compute[189608]: 2025-11-24 22:13:27.726 189613 INFO nova.compute.claims [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Claim successful on node compute-0.ctlplane.example.com
Nov 24 22:13:27 compute-0 nova_compute[189608]: 2025-11-24 22:13:27.886 189613 DEBUG nova.compute.provider_tree [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:13:27 compute-0 nova_compute[189608]: 2025-11-24 22:13:27.898 189613 DEBUG nova.scheduler.client.report [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:13:27 compute-0 nova_compute[189608]: 2025-11-24 22:13:27.923 189613 DEBUG oslo_concurrency.lockutils [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.205s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:13:27 compute-0 nova_compute[189608]: 2025-11-24 22:13:27.923 189613 DEBUG nova.compute.manager [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 24 22:13:27 compute-0 nova_compute[189608]: 2025-11-24 22:13:27.980 189613 DEBUG nova.compute.manager [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 24 22:13:27 compute-0 nova_compute[189608]: 2025-11-24 22:13:27.981 189613 DEBUG nova.network.neutron [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 24 22:13:28 compute-0 nova_compute[189608]: 2025-11-24 22:13:28.006 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:13:28 compute-0 nova_compute[189608]: 2025-11-24 22:13:28.011 189613 INFO nova.virt.libvirt.driver [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 24 22:13:28 compute-0 nova_compute[189608]: 2025-11-24 22:13:28.062 189613 DEBUG nova.compute.manager [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 24 22:13:28 compute-0 nova_compute[189608]: 2025-11-24 22:13:28.180 189613 DEBUG nova.compute.manager [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 24 22:13:28 compute-0 nova_compute[189608]: 2025-11-24 22:13:28.182 189613 DEBUG nova.virt.libvirt.driver [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 24 22:13:28 compute-0 nova_compute[189608]: 2025-11-24 22:13:28.183 189613 INFO nova.virt.libvirt.driver [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Creating image(s)
Nov 24 22:13:28 compute-0 nova_compute[189608]: 2025-11-24 22:13:28.185 189613 DEBUG oslo_concurrency.lockutils [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "/var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:13:28 compute-0 nova_compute[189608]: 2025-11-24 22:13:28.186 189613 DEBUG oslo_concurrency.lockutils [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "/var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:13:28 compute-0 nova_compute[189608]: 2025-11-24 22:13:28.188 189613 DEBUG oslo_concurrency.lockutils [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "/var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:13:28 compute-0 nova_compute[189608]: 2025-11-24 22:13:28.217 189613 DEBUG oslo_concurrency.processutils [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bc2e84058646d2c6ba728b20ebecd0301036e9ed --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:13:28 compute-0 nova_compute[189608]: 2025-11-24 22:13:28.319 189613 DEBUG oslo_concurrency.processutils [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bc2e84058646d2c6ba728b20ebecd0301036e9ed --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:13:28 compute-0 nova_compute[189608]: 2025-11-24 22:13:28.320 189613 DEBUG oslo_concurrency.lockutils [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "bc2e84058646d2c6ba728b20ebecd0301036e9ed" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:13:28 compute-0 nova_compute[189608]: 2025-11-24 22:13:28.321 189613 DEBUG oslo_concurrency.lockutils [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "bc2e84058646d2c6ba728b20ebecd0301036e9ed" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:13:28 compute-0 nova_compute[189608]: 2025-11-24 22:13:28.336 189613 DEBUG oslo_concurrency.processutils [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bc2e84058646d2c6ba728b20ebecd0301036e9ed --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:13:28 compute-0 nova_compute[189608]: 2025-11-24 22:13:28.368 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:13:28 compute-0 nova_compute[189608]: 2025-11-24 22:13:28.401 189613 DEBUG oslo_concurrency.processutils [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bc2e84058646d2c6ba728b20ebecd0301036e9ed --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:13:28 compute-0 nova_compute[189608]: 2025-11-24 22:13:28.402 189613 DEBUG oslo_concurrency.processutils [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/bc2e84058646d2c6ba728b20ebecd0301036e9ed,backing_fmt=raw /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:13:28 compute-0 nova_compute[189608]: 2025-11-24 22:13:28.445 189613 DEBUG oslo_concurrency.processutils [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/bc2e84058646d2c6ba728b20ebecd0301036e9ed,backing_fmt=raw /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk 1073741824" returned: 0 in 0.043s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:13:28 compute-0 nova_compute[189608]: 2025-11-24 22:13:28.447 189613 DEBUG oslo_concurrency.lockutils [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "bc2e84058646d2c6ba728b20ebecd0301036e9ed" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.126s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:13:28 compute-0 nova_compute[189608]: 2025-11-24 22:13:28.448 189613 DEBUG oslo_concurrency.processutils [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bc2e84058646d2c6ba728b20ebecd0301036e9ed --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:13:28 compute-0 nova_compute[189608]: 2025-11-24 22:13:28.512 189613 DEBUG oslo_concurrency.processutils [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bc2e84058646d2c6ba728b20ebecd0301036e9ed --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:13:28 compute-0 nova_compute[189608]: 2025-11-24 22:13:28.514 189613 DEBUG nova.virt.disk.api [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Checking if we can resize image /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 24 22:13:28 compute-0 nova_compute[189608]: 2025-11-24 22:13:28.514 189613 DEBUG oslo_concurrency.processutils [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:13:28 compute-0 nova_compute[189608]: 2025-11-24 22:13:28.602 189613 DEBUG oslo_concurrency.processutils [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:13:28 compute-0 nova_compute[189608]: 2025-11-24 22:13:28.604 189613 DEBUG nova.virt.disk.api [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Cannot resize image /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Nov 24 22:13:28 compute-0 nova_compute[189608]: 2025-11-24 22:13:28.605 189613 DEBUG nova.objects.instance [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lazy-loading 'migration_context' on Instance uuid 7e7d375c-a42c-41c5-934f-c46941a40067 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:13:28 compute-0 nova_compute[189608]: 2025-11-24 22:13:28.619 189613 DEBUG oslo_concurrency.lockutils [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "/var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:13:28 compute-0 nova_compute[189608]: 2025-11-24 22:13:28.620 189613 DEBUG oslo_concurrency.lockutils [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "/var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:13:28 compute-0 nova_compute[189608]: 2025-11-24 22:13:28.621 189613 DEBUG oslo_concurrency.lockutils [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "/var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:13:28 compute-0 nova_compute[189608]: 2025-11-24 22:13:28.637 189613 DEBUG oslo_concurrency.processutils [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:13:28 compute-0 nova_compute[189608]: 2025-11-24 22:13:28.717 189613 DEBUG oslo_concurrency.processutils [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:13:28 compute-0 nova_compute[189608]: 2025-11-24 22:13:28.719 189613 DEBUG oslo_concurrency.lockutils [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:13:28 compute-0 nova_compute[189608]: 2025-11-24 22:13:28.720 189613 DEBUG oslo_concurrency.lockutils [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:13:28 compute-0 nova_compute[189608]: 2025-11-24 22:13:28.734 189613 DEBUG oslo_concurrency.processutils [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:13:28 compute-0 nova_compute[189608]: 2025-11-24 22:13:28.792 189613 DEBUG oslo_concurrency.processutils [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:13:28 compute-0 nova_compute[189608]: 2025-11-24 22:13:28.794 189613 DEBUG oslo_concurrency.processutils [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:13:28 compute-0 nova_compute[189608]: 2025-11-24 22:13:28.859 189613 DEBUG oslo_concurrency.processutils [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.eph0 1073741824" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:13:28 compute-0 nova_compute[189608]: 2025-11-24 22:13:28.861 189613 DEBUG oslo_concurrency.lockutils [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.141s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:13:28 compute-0 nova_compute[189608]: 2025-11-24 22:13:28.861 189613 DEBUG oslo_concurrency.processutils [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:13:28 compute-0 nova_compute[189608]: 2025-11-24 22:13:28.947 189613 DEBUG oslo_concurrency.processutils [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:13:28 compute-0 nova_compute[189608]: 2025-11-24 22:13:28.949 189613 DEBUG nova.virt.libvirt.driver [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 24 22:13:28 compute-0 nova_compute[189608]: 2025-11-24 22:13:28.949 189613 DEBUG nova.virt.libvirt.driver [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Ensure instance console log exists: /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 24 22:13:28 compute-0 nova_compute[189608]: 2025-11-24 22:13:28.950 189613 DEBUG oslo_concurrency.lockutils [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:13:28 compute-0 nova_compute[189608]: 2025-11-24 22:13:28.950 189613 DEBUG oslo_concurrency.lockutils [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:13:28 compute-0 nova_compute[189608]: 2025-11-24 22:13:28.951 189613 DEBUG oslo_concurrency.lockutils [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:13:29 compute-0 podman[203795]: time="2025-11-24T22:13:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:13:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:13:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 24 22:13:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:13:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4806 "" "Go-http-client/1.1"
Nov 24 22:13:30 compute-0 nova_compute[189608]: 2025-11-24 22:13:30.456 189613 DEBUG nova.network.neutron [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Successfully updated port: 482c3cfb-c114-4d01-aa49-09b8d4fdaaa5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 24 22:13:30 compute-0 nova_compute[189608]: 2025-11-24 22:13:30.473 189613 DEBUG oslo_concurrency.lockutils [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "refresh_cache-7e7d375c-a42c-41c5-934f-c46941a40067" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:13:30 compute-0 nova_compute[189608]: 2025-11-24 22:13:30.474 189613 DEBUG oslo_concurrency.lockutils [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquired lock "refresh_cache-7e7d375c-a42c-41c5-934f-c46941a40067" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:13:30 compute-0 nova_compute[189608]: 2025-11-24 22:13:30.474 189613 DEBUG nova.network.neutron [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 22:13:30 compute-0 nova_compute[189608]: 2025-11-24 22:13:30.566 189613 DEBUG nova.compute.manager [req-ab8500bf-1d31-4621-a863-088b3b338226 req-b82d8d1d-89c6-40d4-a63c-ccd8fe27b503 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Received event network-changed-482c3cfb-c114-4d01-aa49-09b8d4fdaaa5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:13:30 compute-0 nova_compute[189608]: 2025-11-24 22:13:30.567 189613 DEBUG nova.compute.manager [req-ab8500bf-1d31-4621-a863-088b3b338226 req-b82d8d1d-89c6-40d4-a63c-ccd8fe27b503 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Refreshing instance network info cache due to event network-changed-482c3cfb-c114-4d01-aa49-09b8d4fdaaa5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 22:13:30 compute-0 nova_compute[189608]: 2025-11-24 22:13:30.567 189613 DEBUG oslo_concurrency.lockutils [req-ab8500bf-1d31-4621-a863-088b3b338226 req-b82d8d1d-89c6-40d4-a63c-ccd8fe27b503 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "refresh_cache-7e7d375c-a42c-41c5-934f-c46941a40067" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:13:31 compute-0 nova_compute[189608]: 2025-11-24 22:13:31.259 189613 DEBUG nova.network.neutron [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 22:13:31 compute-0 openstack_network_exporter[205945]: ERROR   22:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:13:31 compute-0 openstack_network_exporter[205945]: ERROR   22:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:13:31 compute-0 openstack_network_exporter[205945]: ERROR   22:13:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:13:31 compute-0 openstack_network_exporter[205945]: ERROR   22:13:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:13:31 compute-0 openstack_network_exporter[205945]: ERROR   22:13:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.720 189613 DEBUG nova.network.neutron [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Updating instance_info_cache with network_info: [{"id": "482c3cfb-c114-4d01-aa49-09b8d4fdaaa5", "address": "fa:16:3e:b3:8e:1d", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.49", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap482c3cfb-c1", "ovs_interfaceid": "482c3cfb-c114-4d01-aa49-09b8d4fdaaa5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
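The network_info payload recorded above is plain JSON, so the fixed and floating addresses can be pulled out directly. A small sketch using only the fields shown in that log line (the literal below is a trimmed copy of that payload):

    import json

    network_info = json.loads("""[{"id": "482c3cfb-c114-4d01-aa49-09b8d4fdaaa5",
      "network": {"subnets": [{"cidr": "192.168.0.0/24",
        "ips": [{"address": "192.168.0.49",
                 "floating_ips": [{"address": "192.168.122.183"}]}]}]}}]""")

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], floats)   # port id, fixed IP, floating IPs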
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.749 189613 DEBUG oslo_concurrency.lockutils [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Releasing lock "refresh_cache-7e7d375c-a42c-41c5-934f-c46941a40067" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.750 189613 DEBUG nova.compute.manager [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Instance network_info: |[{"id": "482c3cfb-c114-4d01-aa49-09b8d4fdaaa5", "address": "fa:16:3e:b3:8e:1d", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.49", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap482c3cfb-c1", "ovs_interfaceid": "482c3cfb-c114-4d01-aa49-09b8d4fdaaa5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.751 189613 DEBUG oslo_concurrency.lockutils [req-ab8500bf-1d31-4621-a863-088b3b338226 req-b82d8d1d-89c6-40d4-a63c-ccd8fe27b503 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquired lock "refresh_cache-7e7d375c-a42c-41c5-934f-c46941a40067" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.751 189613 DEBUG nova.network.neutron [req-ab8500bf-1d31-4621-a863-088b3b338226 req-b82d8d1d-89c6-40d4-a63c-ccd8fe27b503 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Refreshing network info cache for port 482c3cfb-c114-4d01-aa49-09b8d4fdaaa5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.754 189613 DEBUG nova.virt.libvirt.driver [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Start _get_guest_xml network_info=[{"id": "482c3cfb-c114-4d01-aa49-09b8d4fdaaa5", "address": "fa:16:3e:b3:8e:1d", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.49", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap482c3cfb-c1", "ovs_interfaceid": "482c3cfb-c114-4d01-aa49-09b8d4fdaaa5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-24T22:03:25Z,direct_url=<?>,disk_format='qcow2',id=a63b9561-12dc-4c11-858f-aa6fafbed036,min_disk=0,min_ram=0,name='cirros',owner='309342b7e3e849b2a5dd56651d8fa068',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-24T22:03:26Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'size': 0, 'encryption_format': None, 'guest_format': None, 'image_id': 'a63b9561-12dc-4c11-858f-aa6fafbed036'}], 'ephemerals': [{'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vdb', 'device_type': 'disk', 'encryption_secret_uuid': None, 'size': 1, 'encryption_format': None, 'guest_format': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.766 189613 WARNING nova.virt.libvirt.driver [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.781 189613 DEBUG nova.virt.libvirt.host [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.782 189613 DEBUG nova.virt.libvirt.host [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.789 189613 DEBUG nova.virt.libvirt.host [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.790 189613 DEBUG nova.virt.libvirt.host [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
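The cgroups probe above amounts to checking which controllers the host exposes; a rough approximation (not nova's code) that reads the cgroup-v2 controller list:

    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root="/sys/fs/cgroup"):
        controllers = Path(root, "cgroup.controllers")
        if not controllers.exists():        # host is not running unified cgroups v2
            return False
        return "cpu" in controllers.read_text().split()

    print(has_cgroupsv2_cpu_controller())   # True on this host, per the log line above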
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.790 189613 DEBUG nova.virt.libvirt.driver [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.791 189613 DEBUG nova.virt.hardware [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-24T22:03:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='cb3b4b8d-66d1-4524-ab48-d562ca8e5b4b',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-24T22:03:25Z,direct_url=<?>,disk_format='qcow2',id=a63b9561-12dc-4c11-858f-aa6fafbed036,min_disk=0,min_ram=0,name='cirros',owner='309342b7e3e849b2a5dd56651d8fa068',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-24T22:03:26Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.792 189613 DEBUG nova.virt.hardware [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.792 189613 DEBUG nova.virt.hardware [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.792 189613 DEBUG nova.virt.hardware [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.793 189613 DEBUG nova.virt.hardware [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.793 189613 DEBUG nova.virt.hardware [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.794 189613 DEBUG nova.virt.hardware [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.794 189613 DEBUG nova.virt.hardware [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.795 189613 DEBUG nova.virt.hardware [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.795 189613 DEBUG nova.virt.hardware [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.795 189613 DEBUG nova.virt.hardware [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.800 189613 DEBUG nova.virt.libvirt.vif [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T22:13:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-7lqlsa5-qwni3m7kkkje-hcw6uchrxoeb-vnf-7cvls2zpo5gs',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-7lqlsa5-qwni3m7kkkje-hcw6uchrxoeb-vnf-7cvls2zpo5gs',id=4,image_ref='a63b9561-12dc-4c11-858f-aa6fafbed036',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='b438824c-ce52-4539-9db6-355e0ca018db'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='309342b7e3e849b2a5dd56651d8fa068',ramdisk_id='',reservation_id='r-anpt6b1a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='a63b9561-12dc-4c11-858f-aa6fafbed036',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T22:13:28Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT03OTgzMDI3NDg5Mzk1MTMwMDY5PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTc5ODMwMjc0ODkzOTUxMzAwNjk9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09Nzk4MzAyNzQ4OTM5NTEzMDA2OT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBo
YXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTc5ODMwMjc0ODkzOTUxMzAwNjk9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT03OTgzMDI3NDg5Mzk1MTMwMDY5PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT03OTgzMDI3NDg5Mzk1MTMwMDY5PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5
kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJnc
Nov 24 22:13:32 compute-0 nova_compute[189608]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09Nzk4MzAyNzQ4OTM5NTEzMDA2OT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTc5ODMwMjc0ODkzOTUxMzAwNjk9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT03OTgzMDI3NDg5Mzk1MTMwMDY5PT0tLQo=',user_id='572aaac113f54af8a894707849aed6bf',uuid=7e7d375c-a42c-41c5-934f-c46941a40067,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "482c3cfb-c114-4d01-aa49-09b8d4fdaaa5", "address": "fa:16:3e:b3:8e:1d", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.49", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap482c3cfb-c1", "ovs_interfaceid": "482c3cfb-c114-4d01-aa49-09b8d4fdaaa5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.801 189613 DEBUG nova.network.os_vif_util [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Converting VIF {"id": "482c3cfb-c114-4d01-aa49-09b8d4fdaaa5", "address": "fa:16:3e:b3:8e:1d", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.49", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap482c3cfb-c1", "ovs_interfaceid": "482c3cfb-c114-4d01-aa49-09b8d4fdaaa5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.802 189613 DEBUG nova.network.os_vif_util [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:8e:1d,bridge_name='br-int',has_traffic_filtering=True,id=482c3cfb-c114-4d01-aa49-09b8d4fdaaa5,network=Network(1d1b3625-954d-4d8b-8b3f-323c25d9b42a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap482c3cfb-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.803 189613 DEBUG nova.objects.instance [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7e7d375c-a42c-41c5-934f-c46941a40067 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.815 189613 DEBUG nova.virt.libvirt.driver [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] End _get_guest_xml xml=<domain type="kvm">
Nov 24 22:13:32 compute-0 nova_compute[189608]:   <uuid>7e7d375c-a42c-41c5-934f-c46941a40067</uuid>
Nov 24 22:13:32 compute-0 nova_compute[189608]:   <name>instance-00000004</name>
Nov 24 22:13:32 compute-0 nova_compute[189608]:   <memory>524288</memory>
Nov 24 22:13:32 compute-0 nova_compute[189608]:   <vcpu>1</vcpu>
Nov 24 22:13:32 compute-0 nova_compute[189608]:   <metadata>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 22:13:32 compute-0 nova_compute[189608]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:       <nova:name>vn-7lqlsa5-qwni3m7kkkje-hcw6uchrxoeb-vnf-7cvls2zpo5gs</nova:name>
Nov 24 22:13:32 compute-0 nova_compute[189608]:       <nova:creationTime>2025-11-24 22:13:32</nova:creationTime>
Nov 24 22:13:32 compute-0 nova_compute[189608]:       <nova:flavor name="m1.small">
Nov 24 22:13:32 compute-0 nova_compute[189608]:         <nova:memory>512</nova:memory>
Nov 24 22:13:32 compute-0 nova_compute[189608]:         <nova:disk>1</nova:disk>
Nov 24 22:13:32 compute-0 nova_compute[189608]:         <nova:swap>0</nova:swap>
Nov 24 22:13:32 compute-0 nova_compute[189608]:         <nova:ephemeral>1</nova:ephemeral>
Nov 24 22:13:32 compute-0 nova_compute[189608]:         <nova:vcpus>1</nova:vcpus>
Nov 24 22:13:32 compute-0 nova_compute[189608]:       </nova:flavor>
Nov 24 22:13:32 compute-0 nova_compute[189608]:       <nova:owner>
Nov 24 22:13:32 compute-0 nova_compute[189608]:         <nova:user uuid="572aaac113f54af8a894707849aed6bf">admin</nova:user>
Nov 24 22:13:32 compute-0 nova_compute[189608]:         <nova:project uuid="309342b7e3e849b2a5dd56651d8fa068">admin</nova:project>
Nov 24 22:13:32 compute-0 nova_compute[189608]:       </nova:owner>
Nov 24 22:13:32 compute-0 nova_compute[189608]:       <nova:root type="image" uuid="a63b9561-12dc-4c11-858f-aa6fafbed036"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:       <nova:ports>
Nov 24 22:13:32 compute-0 nova_compute[189608]:         <nova:port uuid="482c3cfb-c114-4d01-aa49-09b8d4fdaaa5">
Nov 24 22:13:32 compute-0 nova_compute[189608]:           <nova:ip type="fixed" address="192.168.0.49" ipVersion="4"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:         </nova:port>
Nov 24 22:13:32 compute-0 nova_compute[189608]:       </nova:ports>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     </nova:instance>
Nov 24 22:13:32 compute-0 nova_compute[189608]:   </metadata>
Nov 24 22:13:32 compute-0 nova_compute[189608]:   <sysinfo type="smbios">
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <system>
Nov 24 22:13:32 compute-0 nova_compute[189608]:       <entry name="manufacturer">RDO</entry>
Nov 24 22:13:32 compute-0 nova_compute[189608]:       <entry name="product">OpenStack Compute</entry>
Nov 24 22:13:32 compute-0 nova_compute[189608]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 22:13:32 compute-0 nova_compute[189608]:       <entry name="serial">7e7d375c-a42c-41c5-934f-c46941a40067</entry>
Nov 24 22:13:32 compute-0 nova_compute[189608]:       <entry name="uuid">7e7d375c-a42c-41c5-934f-c46941a40067</entry>
Nov 24 22:13:32 compute-0 nova_compute[189608]:       <entry name="family">Virtual Machine</entry>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     </system>
Nov 24 22:13:32 compute-0 nova_compute[189608]:   </sysinfo>
Nov 24 22:13:32 compute-0 nova_compute[189608]:   <os>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <boot dev="hd"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <smbios mode="sysinfo"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:   </os>
Nov 24 22:13:32 compute-0 nova_compute[189608]:   <features>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <acpi/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <apic/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <vmcoreinfo/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:   </features>
Nov 24 22:13:32 compute-0 nova_compute[189608]:   <clock offset="utc">
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <timer name="pit" tickpolicy="delay"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <timer name="hpet" present="no"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:   </clock>
Nov 24 22:13:32 compute-0 nova_compute[189608]:   <cpu mode="host-model" match="exact">
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <topology sockets="1" cores="1" threads="1"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:   </cpu>
Nov 24 22:13:32 compute-0 nova_compute[189608]:   <devices>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <disk type="file" device="disk">
Nov 24 22:13:32 compute-0 nova_compute[189608]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:       <source file="/var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:       <target dev="vda" bus="virtio"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     </disk>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <disk type="file" device="disk">
Nov 24 22:13:32 compute-0 nova_compute[189608]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:       <source file="/var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.eph0"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:       <target dev="vdb" bus="virtio"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     </disk>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <disk type="file" device="cdrom">
Nov 24 22:13:32 compute-0 nova_compute[189608]:       <driver name="qemu" type="raw" cache="none"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:       <source file="/var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.config"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:       <target dev="sda" bus="sata"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     </disk>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <interface type="ethernet">
Nov 24 22:13:32 compute-0 nova_compute[189608]:       <mac address="fa:16:3e:b3:8e:1d"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:       <model type="virtio"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:       <driver name="vhost" rx_queue_size="512"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:       <mtu size="1442"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:       <target dev="tap482c3cfb-c1"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     </interface>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <serial type="pty">
Nov 24 22:13:32 compute-0 nova_compute[189608]:       <log file="/var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/console.log" append="off"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     </serial>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <video>
Nov 24 22:13:32 compute-0 nova_compute[189608]:       <model type="virtio"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     </video>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <input type="tablet" bus="usb"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <rng model="virtio">
Nov 24 22:13:32 compute-0 nova_compute[189608]:       <backend model="random">/dev/urandom</backend>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     </rng>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <controller type="usb" index="0"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     <memballoon model="virtio">
Nov 24 22:13:32 compute-0 nova_compute[189608]:       <stats period="10"/>
Nov 24 22:13:32 compute-0 nova_compute[189608]:     </memballoon>
Nov 24 22:13:32 compute-0 nova_compute[189608]:   </devices>
Nov 24 22:13:32 compute-0 nova_compute[189608]: </domain>
Nov 24 22:13:32 compute-0 nova_compute[189608]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
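The XML dumped above is what eventually gets handed to libvirt. A hedged sketch of doing the same step by hand with the libvirt Python bindings; nova's driver does this internally, and "domain.xml" here is a hypothetical file holding that XML:

    import libvirt

    with open("domain.xml") as f:
        xml = f.read()

    conn = libvirt.open("qemu:///system")    # requires libvirtd and sufficient privileges
    try:
        dom = conn.defineXML(xml)            # persist the definition
        dom.create()                         # boot instance-00000004
    finally:
        conn.close()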
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.817 189613 DEBUG nova.compute.manager [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Preparing to wait for external event network-vif-plugged-482c3cfb-c114-4d01-aa49-09b8d4fdaaa5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.817 189613 DEBUG oslo_concurrency.lockutils [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "7e7d375c-a42c-41c5-934f-c46941a40067-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.817 189613 DEBUG oslo_concurrency.lockutils [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "7e7d375c-a42c-41c5-934f-c46941a40067-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.818 189613 DEBUG oslo_concurrency.lockutils [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "7e7d375c-a42c-41c5-934f-c46941a40067-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.818 189613 DEBUG nova.virt.libvirt.vif [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T22:13:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-7lqlsa5-qwni3m7kkkje-hcw6uchrxoeb-vnf-7cvls2zpo5gs',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-7lqlsa5-qwni3m7kkkje-hcw6uchrxoeb-vnf-7cvls2zpo5gs',id=4,image_ref='a63b9561-12dc-4c11-858f-aa6fafbed036',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='b438824c-ce52-4539-9db6-355e0ca018db'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='309342b7e3e849b2a5dd56651d8fa068',ramdisk_id='',reservation_id='r-anpt6b1a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='a63b9561-12dc-4c11-858f-aa6fafbed036',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T22:13:28Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT03OTgzMDI3NDg5Mzk1MTMwMDY5PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTc5ODMwMjc0ODkzOTUxMzAwNjk9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09Nzk4MzAyNzQ4OTM5NTEzMDA2OT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm
50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTc5ODMwMjc0ODkzOTUxMzAwNjk9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT03OTgzMDI3NDg5Mzk1MTMwMDY5PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT03OTgzMDI3NDg5Mzk1MTMwMDY5PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpY
nV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9
Nov 24 22:13:32 compute-0 nova_compute[189608]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09Nzk4MzAyNzQ4OTM5NTEzMDA2OT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTc5ODMwMjc0ODkzOTUxMzAwNjk9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT03OTgzMDI3NDg5Mzk1MTMwMDY5PT0tLQo=',user_id='572aaac113f54af8a894707849aed6bf',uuid=7e7d375c-a42c-41c5-934f-c46941a40067,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "482c3cfb-c114-4d01-aa49-09b8d4fdaaa5", "address": "fa:16:3e:b3:8e:1d", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.49", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap482c3cfb-c1", "ovs_interfaceid": "482c3cfb-c114-4d01-aa49-09b8d4fdaaa5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.819 189613 DEBUG nova.network.os_vif_util [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Converting VIF {"id": "482c3cfb-c114-4d01-aa49-09b8d4fdaaa5", "address": "fa:16:3e:b3:8e:1d", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.49", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap482c3cfb-c1", "ovs_interfaceid": "482c3cfb-c114-4d01-aa49-09b8d4fdaaa5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.820 189613 DEBUG nova.network.os_vif_util [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:8e:1d,bridge_name='br-int',has_traffic_filtering=True,id=482c3cfb-c114-4d01-aa49-09b8d4fdaaa5,network=Network(1d1b3625-954d-4d8b-8b3f-323c25d9b42a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap482c3cfb-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.820 189613 DEBUG os_vif [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:8e:1d,bridge_name='br-int',has_traffic_filtering=True,id=482c3cfb-c114-4d01-aa49-09b8d4fdaaa5,network=Network(1d1b3625-954d-4d8b-8b3f-323c25d9b42a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap482c3cfb-c1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.821 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.822 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.822 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.826 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.827 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap482c3cfb-c1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.827 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap482c3cfb-c1, col_values=(('external_ids', {'iface-id': '482c3cfb-c114-4d01-aa49-09b8d4fdaaa5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b3:8e:1d', 'vm-uuid': '7e7d375c-a42c-41c5-934f-c46941a40067'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:13:32 compute-0 NetworkManager[56413]: <info>  [1764022412.8306] manager: (tap482c3cfb-c1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.833 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.843 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.845 189613 INFO os_vif [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:8e:1d,bridge_name='br-int',has_traffic_filtering=True,id=482c3cfb-c114-4d01-aa49-09b8d4fdaaa5,network=Network(1d1b3625-954d-4d8b-8b3f-323c25d9b42a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap482c3cfb-c1')
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.902 189613 DEBUG nova.virt.libvirt.driver [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.903 189613 DEBUG nova.virt.libvirt.driver [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.903 189613 DEBUG nova.virt.libvirt.driver [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.903 189613 DEBUG nova.virt.libvirt.driver [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] No VIF found with MAC fa:16:3e:b3:8e:1d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 24 22:13:32 compute-0 nova_compute[189608]: 2025-11-24 22:13:32.904 189613 INFO nova.virt.libvirt.driver [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Using config drive
Nov 24 22:13:33 compute-0 rsyslogd[237036]: message too long (8192) with configured size 8096, begin of message is: 2025-11-24 22:13:32.800 189613 DEBUG nova.virt.libvirt.vif [None req-9b6c4194-3c [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 24 22:13:33 compute-0 rsyslogd[237036]: message too long (8192) with configured size 8096, begin of message is: 2025-11-24 22:13:32.818 189613 DEBUG nova.virt.libvirt.vif [None req-9b6c4194-3c [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 24 22:13:33 compute-0 nova_compute[189608]: 2025-11-24 22:13:33.370 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:13:33 compute-0 nova_compute[189608]: 2025-11-24 22:13:33.473 189613 INFO nova.virt.libvirt.driver [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Creating config drive at /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.config
Nov 24 22:13:33 compute-0 nova_compute[189608]: 2025-11-24 22:13:33.485 189613 DEBUG oslo_concurrency.processutils [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx6ozq_22 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:13:33 compute-0 nova_compute[189608]: 2025-11-24 22:13:33.632 189613 DEBUG oslo_concurrency.processutils [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx6ozq_22" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:13:33 compute-0 kernel: tap482c3cfb-c1: entered promiscuous mode
Nov 24 22:13:33 compute-0 nova_compute[189608]: 2025-11-24 22:13:33.733 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:13:33 compute-0 NetworkManager[56413]: <info>  [1764022413.7346] manager: (tap482c3cfb-c1): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Nov 24 22:13:33 compute-0 ovn_controller[97889]: 2025-11-24T22:13:33Z|00045|binding|INFO|Claiming lport 482c3cfb-c114-4d01-aa49-09b8d4fdaaa5 for this chassis.
Nov 24 22:13:33 compute-0 ovn_controller[97889]: 2025-11-24T22:13:33Z|00046|binding|INFO|482c3cfb-c114-4d01-aa49-09b8d4fdaaa5: Claiming fa:16:3e:b3:8e:1d 192.168.0.49
Nov 24 22:13:33 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:13:33.741 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:8e:1d 192.168.0.49'], port_security=['fa:16:3e:b3:8e:1d 192.168.0.49'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-mikdi7lqlsa5-qwni3m7kkkje-hcw6uchrxoeb-port-2mhfyk5gloal', 'neutron:cidrs': '192.168.0.49/24', 'neutron:device_id': '7e7d375c-a42c-41c5-934f-c46941a40067', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1d1b3625-954d-4d8b-8b3f-323c25d9b42a', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-mikdi7lqlsa5-qwni3m7kkkje-hcw6uchrxoeb-port-2mhfyk5gloal', 'neutron:project_id': '309342b7e3e849b2a5dd56651d8fa068', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c15160ba-5e91-4c5e-8c90-4266712c07a0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.183'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b82cef4e-b720-4309-9703-51bb202fedba, chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], logical_port=482c3cfb-c114-4d01-aa49-09b8d4fdaaa5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:13:33 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:13:33.743 106776 INFO neutron.agent.ovn.metadata.agent [-] Port 482c3cfb-c114-4d01-aa49-09b8d4fdaaa5 in datapath 1d1b3625-954d-4d8b-8b3f-323c25d9b42a bound to our chassis
Nov 24 22:13:33 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:13:33.744 106776 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1d1b3625-954d-4d8b-8b3f-323c25d9b42a
Nov 24 22:13:33 compute-0 ovn_controller[97889]: 2025-11-24T22:13:33Z|00047|binding|INFO|Setting lport 482c3cfb-c114-4d01-aa49-09b8d4fdaaa5 ovn-installed in OVS
Nov 24 22:13:33 compute-0 ovn_controller[97889]: 2025-11-24T22:13:33Z|00048|binding|INFO|Setting lport 482c3cfb-c114-4d01-aa49-09b8d4fdaaa5 up in Southbound
Nov 24 22:13:33 compute-0 nova_compute[189608]: 2025-11-24 22:13:33.752 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:13:33 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:13:33.769 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[70379b4d-dd5b-41fb-84de-8fc712d831af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:13:33 compute-0 systemd-udevd[243800]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 22:13:33 compute-0 systemd-machined[155884]: New machine qemu-4-instance-00000004.
Nov 24 22:13:33 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Nov 24 22:13:33 compute-0 NetworkManager[56413]: <info>  [1764022413.8014] device (tap482c3cfb-c1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 22:13:33 compute-0 NetworkManager[56413]: <info>  [1764022413.8065] device (tap482c3cfb-c1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 24 22:13:33 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:13:33.820 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[69075d6b-9681-438e-b3e7-698e8ae78661]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:13:33 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:13:33.826 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[28b616c6-8542-49de-833e-56e2d12f500c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:13:33 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:13:33.864 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[a8033cfc-d373-44c6-9c8b-36971fda91b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:13:33 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:13:33.882 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[e593b9d7-c1f6-4e24-9548-18a413d24a2a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1d1b3625-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:e7:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 373788, 'reachable_time': 34149, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243810, 'error': None, 'target': 'ovnmeta-1d1b3625-954d-4d8b-8b3f-323c25d9b42a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:13:33 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:13:33.899 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[6e40b6e8-9110-4e38-953b-d72cd081ce86]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1d1b3625-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 373805, 'tstamp': 373805}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243813, 'error': None, 'target': 'ovnmeta-1d1b3625-954d-4d8b-8b3f-323c25d9b42a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap1d1b3625-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 373810, 'tstamp': 373810}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243813, 'error': None, 'target': 'ovnmeta-1d1b3625-954d-4d8b-8b3f-323c25d9b42a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:13:33 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:13:33.900 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d1b3625-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:13:33 compute-0 nova_compute[189608]: 2025-11-24 22:13:33.902 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:13:33 compute-0 nova_compute[189608]: 2025-11-24 22:13:33.904 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:13:33 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:13:33.904 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1d1b3625-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:13:33 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:13:33.905 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 22:13:33 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:13:33.905 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1d1b3625-90, col_values=(('external_ids', {'iface-id': '13073b7d-8165-42cd-87f4-fb1eb15a5b94'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:13:33 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:13:33.906 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 22:13:34 compute-0 nova_compute[189608]: 2025-11-24 22:13:34.188 189613 DEBUG nova.virt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Emitting event <LifecycleEvent: 1764022414.187789, 7e7d375c-a42c-41c5-934f-c46941a40067 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:13:34 compute-0 nova_compute[189608]: 2025-11-24 22:13:34.188 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] VM Started (Lifecycle Event)
Nov 24 22:13:34 compute-0 nova_compute[189608]: 2025-11-24 22:13:34.251 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:13:34 compute-0 nova_compute[189608]: 2025-11-24 22:13:34.258 189613 DEBUG nova.virt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Emitting event <LifecycleEvent: 1764022414.1879663, 7e7d375c-a42c-41c5-934f-c46941a40067 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:13:34 compute-0 nova_compute[189608]: 2025-11-24 22:13:34.259 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] VM Paused (Lifecycle Event)
Nov 24 22:13:34 compute-0 nova_compute[189608]: 2025-11-24 22:13:34.281 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:13:34 compute-0 nova_compute[189608]: 2025-11-24 22:13:34.296 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 22:13:34 compute-0 nova_compute[189608]: 2025-11-24 22:13:34.319 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 22:13:34 compute-0 nova_compute[189608]: 2025-11-24 22:13:34.390 189613 DEBUG nova.compute.manager [req-73e5b9d6-2d89-4d92-b05a-81bfa69a6deb req-f17fdd7a-b68c-4214-9a86-2fbe7988a6b2 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Received event network-vif-plugged-482c3cfb-c114-4d01-aa49-09b8d4fdaaa5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:13:34 compute-0 nova_compute[189608]: 2025-11-24 22:13:34.390 189613 DEBUG oslo_concurrency.lockutils [req-73e5b9d6-2d89-4d92-b05a-81bfa69a6deb req-f17fdd7a-b68c-4214-9a86-2fbe7988a6b2 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "7e7d375c-a42c-41c5-934f-c46941a40067-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:13:34 compute-0 nova_compute[189608]: 2025-11-24 22:13:34.391 189613 DEBUG oslo_concurrency.lockutils [req-73e5b9d6-2d89-4d92-b05a-81bfa69a6deb req-f17fdd7a-b68c-4214-9a86-2fbe7988a6b2 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "7e7d375c-a42c-41c5-934f-c46941a40067-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:13:34 compute-0 nova_compute[189608]: 2025-11-24 22:13:34.391 189613 DEBUG oslo_concurrency.lockutils [req-73e5b9d6-2d89-4d92-b05a-81bfa69a6deb req-f17fdd7a-b68c-4214-9a86-2fbe7988a6b2 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "7e7d375c-a42c-41c5-934f-c46941a40067-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:13:34 compute-0 nova_compute[189608]: 2025-11-24 22:13:34.391 189613 DEBUG nova.compute.manager [req-73e5b9d6-2d89-4d92-b05a-81bfa69a6deb req-f17fdd7a-b68c-4214-9a86-2fbe7988a6b2 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Processing event network-vif-plugged-482c3cfb-c114-4d01-aa49-09b8d4fdaaa5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 24 22:13:34 compute-0 nova_compute[189608]: 2025-11-24 22:13:34.392 189613 DEBUG nova.compute.manager [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 24 22:13:34 compute-0 nova_compute[189608]: 2025-11-24 22:13:34.404 189613 DEBUG nova.virt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Emitting event <LifecycleEvent: 1764022414.4034338, 7e7d375c-a42c-41c5-934f-c46941a40067 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:13:34 compute-0 nova_compute[189608]: 2025-11-24 22:13:34.405 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] VM Resumed (Lifecycle Event)
Nov 24 22:13:34 compute-0 nova_compute[189608]: 2025-11-24 22:13:34.407 189613 DEBUG nova.virt.libvirt.driver [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 24 22:13:34 compute-0 nova_compute[189608]: 2025-11-24 22:13:34.411 189613 INFO nova.virt.libvirt.driver [-] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Instance spawned successfully.
Nov 24 22:13:34 compute-0 nova_compute[189608]: 2025-11-24 22:13:34.412 189613 DEBUG nova.virt.libvirt.driver [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 24 22:13:34 compute-0 nova_compute[189608]: 2025-11-24 22:13:34.430 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:13:34 compute-0 nova_compute[189608]: 2025-11-24 22:13:34.442 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 22:13:34 compute-0 nova_compute[189608]: 2025-11-24 22:13:34.453 189613 DEBUG nova.virt.libvirt.driver [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:13:34 compute-0 nova_compute[189608]: 2025-11-24 22:13:34.453 189613 DEBUG nova.virt.libvirt.driver [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:13:34 compute-0 nova_compute[189608]: 2025-11-24 22:13:34.454 189613 DEBUG nova.virt.libvirt.driver [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:13:34 compute-0 nova_compute[189608]: 2025-11-24 22:13:34.454 189613 DEBUG nova.virt.libvirt.driver [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:13:34 compute-0 nova_compute[189608]: 2025-11-24 22:13:34.454 189613 DEBUG nova.virt.libvirt.driver [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:13:34 compute-0 nova_compute[189608]: 2025-11-24 22:13:34.455 189613 DEBUG nova.virt.libvirt.driver [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:13:34 compute-0 nova_compute[189608]: 2025-11-24 22:13:34.459 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 22:13:34 compute-0 nova_compute[189608]: 2025-11-24 22:13:34.515 189613 INFO nova.compute.manager [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Took 6.33 seconds to spawn the instance on the hypervisor.
Nov 24 22:13:34 compute-0 nova_compute[189608]: 2025-11-24 22:13:34.516 189613 DEBUG nova.compute.manager [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:13:34 compute-0 nova_compute[189608]: 2025-11-24 22:13:34.591 189613 INFO nova.compute.manager [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Took 6.90 seconds to build instance.
Nov 24 22:13:34 compute-0 nova_compute[189608]: 2025-11-24 22:13:34.612 189613 DEBUG oslo_concurrency.lockutils [None req-9b6c4194-3c2a-4125-8b16-031294060766 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "7e7d375c-a42c-41c5-934f-c46941a40067" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.983s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:13:34 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 24 22:13:34 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 24 22:13:34 compute-0 nova_compute[189608]: 2025-11-24 22:13:34.717 189613 DEBUG nova.network.neutron [req-ab8500bf-1d31-4621-a863-088b3b338226 req-b82d8d1d-89c6-40d4-a63c-ccd8fe27b503 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Updated VIF entry in instance network info cache for port 482c3cfb-c114-4d01-aa49-09b8d4fdaaa5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 22:13:34 compute-0 nova_compute[189608]: 2025-11-24 22:13:34.719 189613 DEBUG nova.network.neutron [req-ab8500bf-1d31-4621-a863-088b3b338226 req-b82d8d1d-89c6-40d4-a63c-ccd8fe27b503 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Updating instance_info_cache with network_info: [{"id": "482c3cfb-c114-4d01-aa49-09b8d4fdaaa5", "address": "fa:16:3e:b3:8e:1d", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.49", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap482c3cfb-c1", "ovs_interfaceid": "482c3cfb-c114-4d01-aa49-09b8d4fdaaa5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:13:34 compute-0 nova_compute[189608]: 2025-11-24 22:13:34.742 189613 DEBUG oslo_concurrency.lockutils [req-ab8500bf-1d31-4621-a863-088b3b338226 req-b82d8d1d-89c6-40d4-a63c-ccd8fe27b503 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Releasing lock "refresh_cache-7e7d375c-a42c-41c5-934f-c46941a40067" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:13:36 compute-0 nova_compute[189608]: 2025-11-24 22:13:36.462 189613 DEBUG nova.compute.manager [req-f2450489-9937-4a2d-ae00-8a191ac0594a req-1f7916f3-529e-467a-82c9-983187c34280 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Received event network-vif-plugged-482c3cfb-c114-4d01-aa49-09b8d4fdaaa5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:13:36 compute-0 nova_compute[189608]: 2025-11-24 22:13:36.463 189613 DEBUG oslo_concurrency.lockutils [req-f2450489-9937-4a2d-ae00-8a191ac0594a req-1f7916f3-529e-467a-82c9-983187c34280 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "7e7d375c-a42c-41c5-934f-c46941a40067-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:13:36 compute-0 nova_compute[189608]: 2025-11-24 22:13:36.463 189613 DEBUG oslo_concurrency.lockutils [req-f2450489-9937-4a2d-ae00-8a191ac0594a req-1f7916f3-529e-467a-82c9-983187c34280 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "7e7d375c-a42c-41c5-934f-c46941a40067-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:13:36 compute-0 nova_compute[189608]: 2025-11-24 22:13:36.463 189613 DEBUG oslo_concurrency.lockutils [req-f2450489-9937-4a2d-ae00-8a191ac0594a req-1f7916f3-529e-467a-82c9-983187c34280 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "7e7d375c-a42c-41c5-934f-c46941a40067-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:13:36 compute-0 nova_compute[189608]: 2025-11-24 22:13:36.463 189613 DEBUG nova.compute.manager [req-f2450489-9937-4a2d-ae00-8a191ac0594a req-1f7916f3-529e-467a-82c9-983187c34280 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] No waiting events found dispatching network-vif-plugged-482c3cfb-c114-4d01-aa49-09b8d4fdaaa5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 22:13:36 compute-0 nova_compute[189608]: 2025-11-24 22:13:36.464 189613 WARNING nova.compute.manager [req-f2450489-9937-4a2d-ae00-8a191ac0594a req-1f7916f3-529e-467a-82c9-983187c34280 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Received unexpected event network-vif-plugged-482c3cfb-c114-4d01-aa49-09b8d4fdaaa5 for instance with vm_state active and task_state None.
Nov 24 22:13:36 compute-0 podman[243841]: 2025-11-24 22:13:36.591346009 +0000 UTC m=+0.125545617 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 24 22:13:37 compute-0 nova_compute[189608]: 2025-11-24 22:13:37.830 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:13:38 compute-0 nova_compute[189608]: 2025-11-24 22:13:38.374 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:13:42 compute-0 podman[243867]: 2025-11-24 22:13:42.578945913 +0000 UTC m=+0.114917107 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 22:13:42 compute-0 nova_compute[189608]: 2025-11-24 22:13:42.835 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:13:43 compute-0 nova_compute[189608]: 2025-11-24 22:13:43.377 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:13:45 compute-0 podman[243887]: 2025-11-24 22:13:45.57467806 +0000 UTC m=+0.118360502 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:13:47 compute-0 nova_compute[189608]: 2025-11-24 22:13:47.840 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:13:48 compute-0 nova_compute[189608]: 2025-11-24 22:13:48.379 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:13:50 compute-0 podman[243909]: 2025-11-24 22:13:50.563614213 +0000 UTC m=+0.122060248 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, architecture=x86_64, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., config_id=edpm, version=9.4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Nov 24 22:13:50 compute-0 podman[243910]: 2025-11-24 22:13:50.570024243 +0000 UTC m=+0.111260862 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, architecture=x86_64, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, release=1755695350, version=9.6, distribution-scope=public, vcs-type=git, config_id=edpm, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., managed_by=edpm_ansible)
Nov 24 22:13:50 compute-0 podman[243911]: 2025-11-24 22:13:50.572784849 +0000 UTC m=+0.108070453 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 22:13:52 compute-0 nova_compute[189608]: 2025-11-24 22:13:52.844 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:13:53 compute-0 nova_compute[189608]: 2025-11-24 22:13:53.384 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:13:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:13:54.570 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:13:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:13:54.571 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:13:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:13:54.571 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:13:55 compute-0 podman[243962]: 2025-11-24 22:13:55.58705374 +0000 UTC m=+0.130775280 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 24 22:13:57 compute-0 nova_compute[189608]: 2025-11-24 22:13:57.849 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:13:58 compute-0 nova_compute[189608]: 2025-11-24 22:13:58.390 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:13:58 compute-0 podman[243987]: 2025-11-24 22:13:58.538713636 +0000 UTC m=+0.079080431 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:13:58 compute-0 podman[243986]: 2025-11-24 22:13:58.5780487 +0000 UTC m=+0.120956984 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 22:13:59 compute-0 podman[203795]: time="2025-11-24T22:13:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:13:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:13:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 24 22:13:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:13:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4799 "" "Go-http-client/1.1"
Nov 24 22:14:01 compute-0 openstack_network_exporter[205945]: ERROR   22:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:14:01 compute-0 openstack_network_exporter[205945]: ERROR   22:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:14:01 compute-0 openstack_network_exporter[205945]: ERROR   22:14:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:14:01 compute-0 openstack_network_exporter[205945]: ERROR   22:14:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:14:01 compute-0 openstack_network_exporter[205945]: ERROR   22:14:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:14:02 compute-0 nova_compute[189608]: 2025-11-24 22:14:02.854 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:14:03 compute-0 nova_compute[189608]: 2025-11-24 22:14:03.392 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:14:03 compute-0 ovn_controller[97889]: 2025-11-24T22:14:03Z|00049|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Nov 24 22:14:07 compute-0 podman[244029]: 2025-11-24 22:14:07.532021398 +0000 UTC m=+0.088939038 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 24 22:14:07 compute-0 nova_compute[189608]: 2025-11-24 22:14:07.860 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:14:08 compute-0 nova_compute[189608]: 2025-11-24 22:14:08.396 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:14:09 compute-0 ovn_controller[97889]: 2025-11-24T22:14:09Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b3:8e:1d 192.168.0.49
Nov 24 22:14:09 compute-0 ovn_controller[97889]: 2025-11-24T22:14:09Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b3:8e:1d 192.168.0.49
Nov 24 22:14:12 compute-0 nova_compute[189608]: 2025-11-24 22:14:12.863 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:14:13 compute-0 nova_compute[189608]: 2025-11-24 22:14:13.399 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:14:13 compute-0 podman[244061]: 2025-11-24 22:14:13.551820935 +0000 UTC m=+0.107483676 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:14:13 compute-0 nova_compute[189608]: 2025-11-24 22:14:13.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:14:13 compute-0 nova_compute[189608]: 2025-11-24 22:14:13.795 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 22:14:14 compute-0 nova_compute[189608]: 2025-11-24 22:14:14.274 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "refresh_cache-672e3ced-b18a-4ce7-aace-eb5c076ddb88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:14:14 compute-0 nova_compute[189608]: 2025-11-24 22:14:14.275 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquired lock "refresh_cache-672e3ced-b18a-4ce7-aace-eb5c076ddb88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:14:14 compute-0 nova_compute[189608]: 2025-11-24 22:14:14.275 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 24 22:14:15 compute-0 nova_compute[189608]: 2025-11-24 22:14:15.672 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Updating instance_info_cache with network_info: [{"id": "2301b73c-6b2a-4a4b-afa2-7d6aa710652b", "address": "fa:16:3e:04:76:e6", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2301b73c-6b", "ovs_interfaceid": "2301b73c-6b2a-4a4b-afa2-7d6aa710652b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:14:15 compute-0 nova_compute[189608]: 2025-11-24 22:14:15.691 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Releasing lock "refresh_cache-672e3ced-b18a-4ce7-aace-eb5c076ddb88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:14:15 compute-0 nova_compute[189608]: 2025-11-24 22:14:15.691 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 24 22:14:16 compute-0 podman[244082]: 2025-11-24 22:14:16.5519771 +0000 UTC m=+0.100373684 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 24 22:14:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:17.623 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 24 22:14:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:17.624 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 24 22:14:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:17.624 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:14:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:17.625 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f4c57fbbfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:14:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:17.626 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:14:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:17.628 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:14:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:17.628 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:14:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:17.628 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:14:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:17.629 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:14:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:17.629 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:14:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:17.629 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:14:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:17.629 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:14:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:17.630 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:14:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:17.630 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:14:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:17.630 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:14:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:17.630 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:14:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:17.631 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:14:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:17.631 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:14:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:17.631 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:14:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:17.631 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:14:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:17.632 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:14:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:17.632 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:14:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:17.632 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:14:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:17.632 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:14:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:17.633 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:14:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:17.633 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:14:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:17.633 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:14:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:17.633 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:14:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:17.634 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:14:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:17.640 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '672e3ced-b18a-4ce7-aace-eb5c076ddb88', 'name': 'vn-7lqlsa5-oenbgqpbbpdb-jsa5qovsyfxv-vnf-d3kwbkhn7jvw', 'flavor': {'id': 'cb3b4b8d-66d1-4524-ab48-d562ca8e5b4b', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a63b9561-12dc-4c11-858f-aa6fafbed036'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '309342b7e3e849b2a5dd56651d8fa068', 'user_id': '572aaac113f54af8a894707849aed6bf', 'hostId': '138597a5fd3bc726da772e57d320c412a04081cc4fa53fa9b77f5b6a', 'status': 'active', 'metadata': {'metering.server_group': 'b438824c-ce52-4539-9db6-355e0ca018db'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 24 22:14:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:17.644 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 7e7d375c-a42c-41c5-934f-c46941a40067 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 24 22:14:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:17.645 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/7e7d375c-a42c-41c5-934f-c46941a40067 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}64060fdce0de69c84bc4ec1fbe9dbdfbc4dd57ffec34862b3445d9841d3967a2" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 24 22:14:17 compute-0 nova_compute[189608]: 2025-11-24 22:14:17.865 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.370 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1959 Content-Type: application/json Date: Mon, 24 Nov 2025 22:14:17 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-3f5eaaa7-ac3f-4b10-8db1-344325a5a28e x-openstack-request-id: req-3f5eaaa7-ac3f-4b10-8db1-344325a5a28e _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.371 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "7e7d375c-a42c-41c5-934f-c46941a40067", "name": "vn-7lqlsa5-qwni3m7kkkje-hcw6uchrxoeb-vnf-7cvls2zpo5gs", "status": "ACTIVE", "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "user_id": "572aaac113f54af8a894707849aed6bf", "metadata": {"metering.server_group": "b438824c-ce52-4539-9db6-355e0ca018db"}, "hostId": "138597a5fd3bc726da772e57d320c412a04081cc4fa53fa9b77f5b6a", "image": {"id": "a63b9561-12dc-4c11-858f-aa6fafbed036", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/a63b9561-12dc-4c11-858f-aa6fafbed036"}]}, "flavor": {"id": "cb3b4b8d-66d1-4524-ab48-d562ca8e5b4b", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/cb3b4b8d-66d1-4524-ab48-d562ca8e5b4b"}]}, "created": "2025-11-24T22:13:24Z", "updated": "2025-11-24T22:13:34Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.49", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:b3:8e:1d"}, {"version": 4, "addr": "192.168.122.183", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:b3:8e:1d"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/7e7d375c-a42c-41c5-934f-c46941a40067"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/7e7d375c-a42c-41c5-934f-c46941a40067"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-11-24T22:13:34.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000004", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.371 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/7e7d375c-a42c-41c5-934f-c46941a40067 used request id req-3f5eaaa7-ac3f-4b10-8db1-344325a5a28e request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.372 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7e7d375c-a42c-41c5-934f-c46941a40067', 'name': 'vn-7lqlsa5-qwni3m7kkkje-hcw6uchrxoeb-vnf-7cvls2zpo5gs', 'flavor': {'id': 'cb3b4b8d-66d1-4524-ab48-d562ca8e5b4b', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a63b9561-12dc-4c11-858f-aa6fafbed036'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '309342b7e3e849b2a5dd56651d8fa068', 'user_id': '572aaac113f54af8a894707849aed6bf', 'hostId': '138597a5fd3bc726da772e57d320c412a04081cc4fa53fa9b77f5b6a', 'status': 'active', 'metadata': {'metering.server_group': 'b438824c-ce52-4539-9db6-355e0ca018db'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.377 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'ea741b45-c6b4-41c0-a70f-c752b616faa2', 'name': 'test_0', 'flavor': {'id': 'cb3b4b8d-66d1-4524-ab48-d562ca8e5b4b', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a63b9561-12dc-4c11-858f-aa6fafbed036'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '309342b7e3e849b2a5dd56651d8fa068', 'user_id': '572aaac113f54af8a894707849aed6bf', 'hostId': '138597a5fd3bc726da772e57d320c412a04081cc4fa53fa9b77f5b6a', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.381 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '828f9a8f-602f-4ad5-a0b0-5a48a328d20e', 'name': 'vn-7lqlsa5-5i26uioiiugb-edy6vcflj352-vnf-mrrqmlwdy5wk', 'flavor': {'id': 'cb3b4b8d-66d1-4524-ab48-d562ca8e5b4b', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a63b9561-12dc-4c11-858f-aa6fafbed036'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '309342b7e3e849b2a5dd56651d8fa068', 'user_id': '572aaac113f54af8a894707849aed6bf', 'hostId': '138597a5fd3bc726da772e57d320c412a04081cc4fa53fa9b77f5b6a', 'status': 'active', 'metadata': {'metering.server_group': 'b438824c-ce52-4539-9db6-355e0ca018db'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.382 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.382 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.382 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.382 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.384 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-24T22:14:18.382626) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.389 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.394 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 7e7d375c-a42c-41c5-934f-c46941a40067 / tap482c3cfb-c1 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.394 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.398 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:18 compute-0 nova_compute[189608]: 2025-11-24 22:14:18.401 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.404 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.404 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.405 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f4c580340b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.405 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.405 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.405 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.405 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.405 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.406 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.406 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.406 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.407 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.407 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f4c57fba4e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.407 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.407 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.408 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.408 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.408 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-24T22:14:18.405661) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.408 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-24T22:14:18.408129) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.433 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.434 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.434 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.461 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.461 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.462 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.498 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.499 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.500 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.535 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.536 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.536 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.537 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.537 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f4c57fbb080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.537 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.538 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.538 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.538 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.539 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-24T22:14:18.538494) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.631 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.632 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.632 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.725 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.726 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.726 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.802 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.802 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.802 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.903 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.904 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.906 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.909 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.909 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f4c57fbb1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.910 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.910 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.910 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.911 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.911 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.read.latency volume: 822487867 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.912 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.read.latency volume: 92574229 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.913 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.read.latency volume: 84915884 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.914 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.read.latency volume: 742675991 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.914 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.read.latency volume: 148600369 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.916 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.read.latency volume: 107984847 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.916 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-24T22:14:18.910978) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.917 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.latency volume: 667223647 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.917 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.latency volume: 118019506 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.918 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.latency volume: 84934831 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.918 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.latency volume: 1140139100 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.919 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.latency volume: 133972753 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.920 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.latency volume: 92855613 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.921 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.921 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f4c57fb9730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.922 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.922 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.922 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.923 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.924 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-24T22:14:18.923538) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.957 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/cpu volume: 35620000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:18.997 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/cpu volume: 34360000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.027 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/cpu volume: 40570000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.071 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/cpu volume: 377990000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.072 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f4c57fbb200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.073 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.073 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.073 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.074 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.074 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-24T22:14:19.074127) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.074 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.075 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.075 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.076 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.077 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.077 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.078 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.079 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.079 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.080 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.080 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.081 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.083 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.083 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f4c57fbb260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.083 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.083 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.084 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.084 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.084 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.085 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-24T22:14:19.084306) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.085 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.085 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.086 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.086 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.087 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.087 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.087 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.088 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.088 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.088 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.089 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.090 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.090 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f4c57fbb2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.090 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.090 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.090 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.090 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.091 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-24T22:14:19.090875) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.091 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.write.bytes volume: 41844736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.091 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.092 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.092 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.write.bytes volume: 41697280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.093 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.093 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.093 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.094 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.094 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.094 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.095 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.095 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.096 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.096 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f4c57fbb320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.096 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.097 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.097 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.097 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.097 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.write.latency volume: 2777922560 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.097 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-24T22:14:19.097321) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.098 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.write.latency volume: 9784234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.098 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.098 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.write.latency volume: 2375457166 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.099 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.write.latency volume: 18006599 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.099 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.100 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.latency volume: 2026985911 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.100 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.latency volume: 25037348 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.100 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.101 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.latency volume: 1391550182 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.101 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.latency volume: 12781157 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.102 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.103 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f4c57fbb380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.103 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.103 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.103 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.104 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.104 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.write.requests volume: 236 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.104 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-24T22:14:19.103932) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.104 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.105 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.105 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.write.requests volume: 221 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.105 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.106 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.106 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.107 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.107 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.107 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.requests volume: 243 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.108 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.108 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.109 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.109 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f4c58034380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.109 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.110 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.110 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.110 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.110 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.111 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.111 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.112 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.112 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.112 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f4c57fbb740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.113 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.113 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.113 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.113 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.113 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-24T22:14:19.110256) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.114 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.114 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.115 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.115 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-24T22:14:19.113936) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.115 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.incoming.bytes.delta volume: 3431 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.116 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.116 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f4c57fbb3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.116 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.116 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.117 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.117 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-24T22:14:19.117121) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.117 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.118 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.118 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f4c57fbb9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.118 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.118 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.118 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.119 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.119 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.119 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-7lqlsa5-qwni3m7kkkje-hcw6uchrxoeb-vnf-7cvls2zpo5gs>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-7lqlsa5-qwni3m7kkkje-hcw6uchrxoeb-vnf-7cvls2zpo5gs>]
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.120 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f4c57fbb440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.120 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.120 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.120 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.120 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.121 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-24T22:14:19.119071) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.121 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-24T22:14:19.120905) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.121 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.121 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f4c57fbbc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.122 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.122 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.122 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.122 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.122 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.122 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.123 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-24T22:14:19.122299) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.123 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.incoming.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.123 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.incoming.packets volume: 54 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.124 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.124 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f4c57fbbcb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.124 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.124 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.124 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.124 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.124 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-24T22:14:19.124585) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.124 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.125 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.125 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.125 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.126 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.126 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f4c57fbbd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.126 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.126 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.126 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.126 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.126 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.127 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.127 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.127 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-24T22:14:19.126572) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.127 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.128 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.128 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f4c57fbbda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.128 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.128 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.128 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.128 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.129 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.129 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-24T22:14:19.128918) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.129 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.outgoing.bytes volume: 1666 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.129 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.129 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.outgoing.bytes volume: 7700 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.130 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.130 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f4c57fbbe30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.130 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.130 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.130 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.130 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.130 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/network.outgoing.bytes.delta volume: 380 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.131 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-24T22:14:19.130784) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.131 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.131 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.131 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.outgoing.bytes.delta volume: 2738 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.132 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
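network.outgoing.bytes is a cumulative counter, while network.outgoing.bytes.delta is the change since the previous poll; for instance 828f9a8f-602f-4ad5-a0b0-5a48a328d20e the cumulative value logged above is 7700 bytes and the delta in this interval is 2738 bytes. A tiny worked example of that relationship (the previous reading below is hypothetical, back-derived from the two logged values):

```python
# Hedged illustration of the cumulative vs. delta relationship seen above:
# the .delta meter is the current cumulative reading minus the previous one.
previous_outgoing_bytes = 4962   # hypothetical reading from the prior poll
current_outgoing_bytes = 7700    # cumulative value logged for 828f9a8f... above
outgoing_bytes_delta = current_outgoing_bytes - previous_outgoing_bytes
assert outgoing_bytes_delta == 2738  # matches the .delta sample in the log
```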
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.132 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f4c57fbb680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.132 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.132 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.132 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.132 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.132 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/memory.usage volume: 49.04296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.133 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/memory.usage volume: 49.546875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.133 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/memory.usage volume: 48.9453125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.133 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-24T22:14:19.132712) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.133 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/memory.usage volume: 48.97265625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.134 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.134 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f4c57fbbec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.134 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.134 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.134 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.134 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.135 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.135 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-24T22:14:19.134824) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.135 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-7lqlsa5-qwni3m7kkkje-hcw6uchrxoeb-vnf-7cvls2zpo5gs>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-7lqlsa5-qwni3m7kkkje-hcw6uchrxoeb-vnf-7cvls2zpo5gs>]
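The ERROR line above is the polling manager permanently excluding network.outgoing.bytes.rate for the listed server: the preceding DEBUG line shows the libvirt inspector provides no data for OutgoingBytesRatePollster, so the pollster raises ceilometer.polling.plugin_base.PollsterPermanentError with the affected resources and the manager stops polling them from that source. A minimal sketch of that blacklist pattern follows; the class and function names are illustrative stand-ins, not the upstream implementation:

```python
# Hedged sketch of the blacklist pattern behind the ERROR above: a pollster
# that can never serve a resource raises a permanent error listing the
# affected resources, and the polling loop excludes them from future polls.
class PollsterPermanentError(Exception):
    def __init__(self, resources):
        super().__init__(resources)
        self.fail_res_list = resources


def poll_once(pollster, resources, blacklist):
    usable = [r for r in resources if r not in blacklist]
    try:
        return list(pollster.get_samples(usable))
    except PollsterPermanentError as err:
        # Mirror of "Prevent pollster ... from polling [...] anymore!"
        blacklist.update(err.fail_res_list)
        return []
```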
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.135 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f4c57fbb6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.135 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.135 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.135 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.135 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.136 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.136 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-24T22:14:19.135941) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.136 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.136 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.incoming.bytes volume: 2136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.137 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.incoming.bytes volume: 8364 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.137 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.137 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f4c59b99130>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.137 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.137 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.137 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.137 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.138 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.138 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-24T22:14:19.137922) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.138 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.138 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.139 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.allocation volume: 22290432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.139 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.139 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.139 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.140 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.140 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.140 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.140 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.141 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.141 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.141 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f4c57fbbf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.141 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.142 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.142 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.142 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.142 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.142 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.outgoing.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.143 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.143 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-24T22:14:19.142229) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.143 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.outgoing.packets volume: 68 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.143 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.144 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.144 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.145 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.145 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.145 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.145 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.145 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.145 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.145 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.146 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.146 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.146 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.146 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.146 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.146 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.146 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.147 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.147 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.147 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.147 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.147 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.147 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.148 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.148 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.148 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:14:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:14:19.148 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
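The burst of "Finished processing pollster [...]" lines above closes out the whole polling task for this interval. When reviewing captures like this one, it can help to pair each "Polling pollster X" INFO line with its "Finished polling pollster X" counterpart to see how long each meter took. A hedged helper for that, assuming the exact journalctl + oslo line shape shown in this log:

```python
# Hedged helper: compute per-pollster durations by pairing the
# "Polling pollster X" and "Finished polling pollster X" INFO lines in a
# saved copy of this journal output (pass the file path as argv[1]).
import re
import sys
from datetime import datetime

PATTERN = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \d+ INFO "
    r"ceilometer\.polling\.manager \[-\] "
    r"(?P<what>Polling|Finished polling) pollster (?P<name>\S+)"
)

started = {}
for line in open(sys.argv[1]):
    m = PATTERN.search(line)
    if not m:
        continue
    ts = datetime.strptime(m.group("ts"), "%Y-%m-%d %H:%M:%S.%f")
    if m.group("what") == "Polling":
        started[m.group("name")] = ts
    elif m.group("name") in started:
        elapsed = (ts - started.pop(m.group("name"))).total_seconds()
        print(f"{m.group('name')}: {elapsed:.3f}s")
```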
Nov 24 22:14:19 compute-0 nova_compute[189608]: 2025-11-24 22:14:19.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:14:21 compute-0 podman[244104]: 2025-11-24 22:14:21.574676573 +0000 UTC m=+0.101830879 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 24 22:14:21 compute-0 podman[244103]: 2025-11-24 22:14:21.578240714 +0000 UTC m=+0.112450730 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, managed_by=edpm_ansible, version=9.6, config_id=edpm, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 24 22:14:21 compute-0 podman[244102]: 2025-11-24 22:14:21.588614597 +0000 UTC m=+0.117758075 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, name=ubi9, build-date=2024-09-18T21:23:30, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, managed_by=edpm_ansible, architecture=x86_64)
Nov 24 22:14:21 compute-0 nova_compute[189608]: 2025-11-24 22:14:21.789 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:14:21 compute-0 nova_compute[189608]: 2025-11-24 22:14:21.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:14:21 compute-0 nova_compute[189608]: 2025-11-24 22:14:21.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:14:21 compute-0 nova_compute[189608]: 2025-11-24 22:14:21.820 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:14:21 compute-0 nova_compute[189608]: 2025-11-24 22:14:21.821 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:14:21 compute-0 nova_compute[189608]: 2025-11-24 22:14:21.821 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:14:21 compute-0 nova_compute[189608]: 2025-11-24 22:14:21.822 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 22:14:21 compute-0 nova_compute[189608]: 2025-11-24 22:14:21.958 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:14:22 compute-0 nova_compute[189608]: 2025-11-24 22:14:22.027 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:14:22 compute-0 nova_compute[189608]: 2025-11-24 22:14:22.029 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:14:22 compute-0 nova_compute[189608]: 2025-11-24 22:14:22.103 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:14:22 compute-0 nova_compute[189608]: 2025-11-24 22:14:22.105 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:14:22 compute-0 nova_compute[189608]: 2025-11-24 22:14:22.166 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:14:22 compute-0 nova_compute[189608]: 2025-11-24 22:14:22.168 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:14:22 compute-0 nova_compute[189608]: 2025-11-24 22:14:22.266 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.eph0 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:14:22 compute-0 nova_compute[189608]: 2025-11-24 22:14:22.277 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:14:22 compute-0 nova_compute[189608]: 2025-11-24 22:14:22.364 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:14:22 compute-0 nova_compute[189608]: 2025-11-24 22:14:22.366 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:14:22 compute-0 nova_compute[189608]: 2025-11-24 22:14:22.455 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:14:22 compute-0 nova_compute[189608]: 2025-11-24 22:14:22.457 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:14:22 compute-0 nova_compute[189608]: 2025-11-24 22:14:22.520 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:14:22 compute-0 nova_compute[189608]: 2025-11-24 22:14:22.521 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:14:22 compute-0 nova_compute[189608]: 2025-11-24 22:14:22.582 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:14:22 compute-0 nova_compute[189608]: 2025-11-24 22:14:22.588 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:14:22 compute-0 nova_compute[189608]: 2025-11-24 22:14:22.669 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:14:22 compute-0 nova_compute[189608]: 2025-11-24 22:14:22.671 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:14:22 compute-0 nova_compute[189608]: 2025-11-24 22:14:22.728 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:14:22 compute-0 nova_compute[189608]: 2025-11-24 22:14:22.729 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:14:22 compute-0 nova_compute[189608]: 2025-11-24 22:14:22.785 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:14:22 compute-0 nova_compute[189608]: 2025-11-24 22:14:22.786 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:14:22 compute-0 nova_compute[189608]: 2025-11-24 22:14:22.845 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:14:22 compute-0 nova_compute[189608]: 2025-11-24 22:14:22.857 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:14:22 compute-0 nova_compute[189608]: 2025-11-24 22:14:22.880 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:14:22 compute-0 nova_compute[189608]: 2025-11-24 22:14:22.931 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:14:22 compute-0 nova_compute[189608]: 2025-11-24 22:14:22.934 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:14:23 compute-0 nova_compute[189608]: 2025-11-24 22:14:23.007 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:14:23 compute-0 nova_compute[189608]: 2025-11-24 22:14:23.008 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:14:23 compute-0 nova_compute[189608]: 2025-11-24 22:14:23.094 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:14:23 compute-0 nova_compute[189608]: 2025-11-24 22:14:23.096 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:14:23 compute-0 nova_compute[189608]: 2025-11-24 22:14:23.162 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:14:23 compute-0 nova_compute[189608]: 2025-11-24 22:14:23.403 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:14:23 compute-0 nova_compute[189608]: 2025-11-24 22:14:23.549 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:14:23 compute-0 nova_compute[189608]: 2025-11-24 22:14:23.551 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4595MB free_disk=72.14066314697266GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 22:14:23 compute-0 nova_compute[189608]: 2025-11-24 22:14:23.552 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:14:23 compute-0 nova_compute[189608]: 2025-11-24 22:14:23.553 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:14:23 compute-0 nova_compute[189608]: 2025-11-24 22:14:23.638 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance ea741b45-c6b4-41c0-a70f-c752b616faa2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:14:23 compute-0 nova_compute[189608]: 2025-11-24 22:14:23.639 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance 828f9a8f-602f-4ad5-a0b0-5a48a328d20e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:14:23 compute-0 nova_compute[189608]: 2025-11-24 22:14:23.639 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance 672e3ced-b18a-4ce7-aace-eb5c076ddb88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:14:23 compute-0 nova_compute[189608]: 2025-11-24 22:14:23.640 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance 7e7d375c-a42c-41c5-934f-c46941a40067 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:14:23 compute-0 nova_compute[189608]: 2025-11-24 22:14:23.640 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 22:14:23 compute-0 nova_compute[189608]: 2025-11-24 22:14:23.641 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 22:14:23 compute-0 nova_compute[189608]: 2025-11-24 22:14:23.749 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:14:23 compute-0 nova_compute[189608]: 2025-11-24 22:14:23.769 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:14:23 compute-0 nova_compute[189608]: 2025-11-24 22:14:23.793 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 22:14:23 compute-0 nova_compute[189608]: 2025-11-24 22:14:23.793 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.241s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:14:24 compute-0 nova_compute[189608]: 2025-11-24 22:14:24.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:14:24 compute-0 nova_compute[189608]: 2025-11-24 22:14:24.796 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:14:26 compute-0 podman[244210]: 2025-11-24 22:14:26.526944055 +0000 UTC m=+0.070178295 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 24 22:14:26 compute-0 nova_compute[189608]: 2025-11-24 22:14:26.792 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:14:26 compute-0 nova_compute[189608]: 2025-11-24 22:14:26.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:14:26 compute-0 nova_compute[189608]: 2025-11-24 22:14:26.794 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 22:14:27 compute-0 nova_compute[189608]: 2025-11-24 22:14:27.886 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:14:28 compute-0 nova_compute[189608]: 2025-11-24 22:14:28.405 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:14:29 compute-0 podman[244237]: 2025-11-24 22:14:29.629676982 +0000 UTC m=+0.166625715 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 22:14:29 compute-0 podman[244236]: 2025-11-24 22:14:29.654233865 +0000 UTC m=+0.200273661 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 22:14:29 compute-0 podman[203795]: time="2025-11-24T22:14:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:14:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:14:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 24 22:14:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:14:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4805 "" "Go-http-client/1.1"
Nov 24 22:14:31 compute-0 openstack_network_exporter[205945]: ERROR   22:14:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:14:31 compute-0 openstack_network_exporter[205945]: ERROR   22:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:14:31 compute-0 openstack_network_exporter[205945]: ERROR   22:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:14:31 compute-0 openstack_network_exporter[205945]: ERROR   22:14:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:14:31 compute-0 openstack_network_exporter[205945]: ERROR   22:14:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:14:32 compute-0 nova_compute[189608]: 2025-11-24 22:14:32.891 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:14:33 compute-0 nova_compute[189608]: 2025-11-24 22:14:33.407 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:14:37 compute-0 nova_compute[189608]: 2025-11-24 22:14:37.895 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:14:38 compute-0 podman[244281]: 2025-11-24 22:14:38.308072116 +0000 UTC m=+0.106243956 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 24 22:14:38 compute-0 nova_compute[189608]: 2025-11-24 22:14:38.410 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:14:42 compute-0 nova_compute[189608]: 2025-11-24 22:14:42.902 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:14:43 compute-0 nova_compute[189608]: 2025-11-24 22:14:43.414 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:14:44 compute-0 podman[244306]: 2025-11-24 22:14:44.544841543 +0000 UTC m=+0.093810679 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 24 22:14:47 compute-0 podman[244326]: 2025-11-24 22:14:47.580948497 +0000 UTC m=+0.136866829 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251118, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_compute)
Nov 24 22:14:47 compute-0 nova_compute[189608]: 2025-11-24 22:14:47.909 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:14:48 compute-0 nova_compute[189608]: 2025-11-24 22:14:48.417 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:14:52 compute-0 podman[244349]: 2025-11-24 22:14:52.567732503 +0000 UTC m=+0.111876062 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3)
Nov 24 22:14:52 compute-0 podman[244347]: 2025-11-24 22:14:52.578633702 +0000 UTC m=+0.123428471 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, container_name=kepler, io.openshift.expose-services=, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., name=ubi9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, distribution-scope=public, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_id=edpm)
Nov 24 22:14:52 compute-0 podman[244348]: 2025-11-24 22:14:52.580143179 +0000 UTC m=+0.124810394 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_id=edpm, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, name=ubi9-minimal, container_name=openstack_network_exporter, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Nov 24 22:14:52 compute-0 nova_compute[189608]: 2025-11-24 22:14:52.913 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:14:53 compute-0 nova_compute[189608]: 2025-11-24 22:14:53.420 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:14:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:14:54.572 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:14:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:14:54.573 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:14:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:14:54.573 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:14:57 compute-0 podman[244405]: 2025-11-24 22:14:57.640476534 +0000 UTC m=+0.180372944 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 24 22:14:57 compute-0 nova_compute[189608]: 2025-11-24 22:14:57.916 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:14:58 compute-0 nova_compute[189608]: 2025-11-24 22:14:58.424 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:14:59 compute-0 podman[203795]: time="2025-11-24T22:14:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:14:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:14:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 24 22:14:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:14:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4806 "" "Go-http-client/1.1"
Nov 24 22:15:00 compute-0 podman[244429]: 2025-11-24 22:15:00.583322675 +0000 UTC m=+0.122632157 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 22:15:00 compute-0 podman[244428]: 2025-11-24 22:15:00.612478542 +0000 UTC m=+0.164862450 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 22:15:01 compute-0 openstack_network_exporter[205945]: ERROR   22:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:15:01 compute-0 openstack_network_exporter[205945]: ERROR   22:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:15:01 compute-0 openstack_network_exporter[205945]: ERROR   22:15:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:15:01 compute-0 openstack_network_exporter[205945]: ERROR   22:15:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:15:01 compute-0 openstack_network_exporter[205945]: ERROR   22:15:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:15:02 compute-0 nova_compute[189608]: 2025-11-24 22:15:02.920 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:15:03 compute-0 nova_compute[189608]: 2025-11-24 22:15:03.426 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:15:07 compute-0 nova_compute[189608]: 2025-11-24 22:15:07.924 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:15:08 compute-0 nova_compute[189608]: 2025-11-24 22:15:08.429 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:15:08 compute-0 podman[244474]: 2025-11-24 22:15:08.571591948 +0000 UTC m=+0.127636802 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 24 22:15:10 compute-0 sshd-session[244498]: Invalid user solana from 45.148.10.240 port 48208
Nov 24 22:15:10 compute-0 sshd-session[244498]: Connection closed by invalid user solana 45.148.10.240 port 48208 [preauth]
Nov 24 22:15:11 compute-0 nova_compute[189608]: 2025-11-24 22:15:11.791 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:15:12 compute-0 nova_compute[189608]: 2025-11-24 22:15:12.928 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:15:13 compute-0 nova_compute[189608]: 2025-11-24 22:15:13.432 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:15:14 compute-0 nova_compute[189608]: 2025-11-24 22:15:14.792 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:15:14 compute-0 nova_compute[189608]: 2025-11-24 22:15:14.793 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 22:15:14 compute-0 nova_compute[189608]: 2025-11-24 22:15:14.793 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 22:15:14 compute-0 podman[244500]: 2025-11-24 22:15:14.812943298 +0000 UTC m=+0.089787845 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 22:15:15 compute-0 nova_compute[189608]: 2025-11-24 22:15:15.303 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "refresh_cache-ea741b45-c6b4-41c0-a70f-c752b616faa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:15:15 compute-0 nova_compute[189608]: 2025-11-24 22:15:15.304 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquired lock "refresh_cache-ea741b45-c6b4-41c0-a70f-c752b616faa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:15:15 compute-0 nova_compute[189608]: 2025-11-24 22:15:15.304 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 24 22:15:15 compute-0 nova_compute[189608]: 2025-11-24 22:15:15.305 189613 DEBUG nova.objects.instance [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lazy-loading 'info_cache' on Instance uuid ea741b45-c6b4-41c0-a70f-c752b616faa2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:15:16 compute-0 nova_compute[189608]: 2025-11-24 22:15:16.875 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Updating instance_info_cache with network_info: [{"id": "5430cfcb-550b-4518-9caa-0720f99730b9", "address": "fa:16:3e:85:21:ae", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5430cfcb-55", "ovs_interfaceid": "5430cfcb-550b-4518-9caa-0720f99730b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:15:16 compute-0 nova_compute[189608]: 2025-11-24 22:15:16.904 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Releasing lock "refresh_cache-ea741b45-c6b4-41c0-a70f-c752b616faa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:15:16 compute-0 nova_compute[189608]: 2025-11-24 22:15:16.904 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 24 22:15:17 compute-0 nova_compute[189608]: 2025-11-24 22:15:17.932 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:15:18 compute-0 nova_compute[189608]: 2025-11-24 22:15:18.434 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:15:18 compute-0 podman[244520]: 2025-11-24 22:15:18.545468978 +0000 UTC m=+0.106183154 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 24 22:15:21 compute-0 nova_compute[189608]: 2025-11-24 22:15:21.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:15:22 compute-0 sshd-session[244541]: Invalid user solv from 193.32.162.145 port 47776
Nov 24 22:15:22 compute-0 sshd-session[244541]: Connection closed by invalid user solv 193.32.162.145 port 47776 [preauth]
Nov 24 22:15:22 compute-0 nova_compute[189608]: 2025-11-24 22:15:22.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:15:22 compute-0 nova_compute[189608]: 2025-11-24 22:15:22.795 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:15:22 compute-0 nova_compute[189608]: 2025-11-24 22:15:22.796 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 24 22:15:22 compute-0 nova_compute[189608]: 2025-11-24 22:15:22.936 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:15:23 compute-0 nova_compute[189608]: 2025-11-24 22:15:23.438 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:15:23 compute-0 podman[244544]: 2025-11-24 22:15:23.585228763 +0000 UTC m=+0.121799000 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, release=1755695350, architecture=x86_64, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., name=ubi9-minimal, version=9.6, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Nov 24 22:15:23 compute-0 podman[244543]: 2025-11-24 22:15:23.596742471 +0000 UTC m=+0.137663144 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, release-0.7.12=, build-date=2024-09-18T21:23:30, container_name=kepler, maintainer=Red Hat, Inc., distribution-scope=public, io.openshift.tags=base rhel9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, com.redhat.component=ubi9-container, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Nov 24 22:15:23 compute-0 podman[244545]: 2025-11-24 22:15:23.629293853 +0000 UTC m=+0.159478653 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 24 22:15:23 compute-0 nova_compute[189608]: 2025-11-24 22:15:23.806 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:15:23 compute-0 nova_compute[189608]: 2025-11-24 22:15:23.806 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:15:23 compute-0 nova_compute[189608]: 2025-11-24 22:15:23.807 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:15:23 compute-0 nova_compute[189608]: 2025-11-24 22:15:23.861 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:15:23 compute-0 nova_compute[189608]: 2025-11-24 22:15:23.862 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:15:23 compute-0 nova_compute[189608]: 2025-11-24 22:15:23.862 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
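The three lockutils lines above (Acquiring / acquired / released around clean_compute_node_cache) are the trace emitted by oslo.concurrency's synchronized wrapper, while the earlier refresh_cache lines without a "by ..." clause come from the plain lock() context manager. A minimal sketch of both forms, assuming only the public oslo_concurrency.lockutils API; lock names are illustrative:

```python
# Minimal sketch, assuming oslo.concurrency is installed; lock names are illustrative.
from oslo_concurrency import lockutils

synchronized = lockutils.synchronized_with_prefix('nova-')

@synchronized('compute_resources')
def update_tracker():
    # Entry/exit of this wrapper produces the
    # 'Acquiring lock "compute_resources" by "..."' / acquired / released DEBUG lines.
    pass

# Context-manager form: logs acquire/release without the 'by <function>' clause,
# like the refresh_cache-<uuid> lines earlier in this capture.
with lockutils.lock('refresh_cache-example-uuid'):
    update_tracker()
```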
Nov 24 22:15:23 compute-0 nova_compute[189608]: 2025-11-24 22:15:23.862 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 22:15:23 compute-0 nova_compute[189608]: 2025-11-24 22:15:23.981 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:15:24 compute-0 nova_compute[189608]: 2025-11-24 22:15:24.072 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:15:24 compute-0 nova_compute[189608]: 2025-11-24 22:15:24.075 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:15:24 compute-0 nova_compute[189608]: 2025-11-24 22:15:24.132 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:15:24 compute-0 nova_compute[189608]: 2025-11-24 22:15:24.134 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:15:24 compute-0 nova_compute[189608]: 2025-11-24 22:15:24.214 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.eph0 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:15:24 compute-0 nova_compute[189608]: 2025-11-24 22:15:24.216 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:15:24 compute-0 nova_compute[189608]: 2025-11-24 22:15:24.275 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:15:24 compute-0 nova_compute[189608]: 2025-11-24 22:15:24.281 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:15:24 compute-0 nova_compute[189608]: 2025-11-24 22:15:24.344 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:15:24 compute-0 nova_compute[189608]: 2025-11-24 22:15:24.345 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:15:24 compute-0 nova_compute[189608]: 2025-11-24 22:15:24.417 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:15:24 compute-0 nova_compute[189608]: 2025-11-24 22:15:24.419 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:15:24 compute-0 nova_compute[189608]: 2025-11-24 22:15:24.480 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:15:24 compute-0 nova_compute[189608]: 2025-11-24 22:15:24.482 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:15:24 compute-0 nova_compute[189608]: 2025-11-24 22:15:24.545 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:15:24 compute-0 nova_compute[189608]: 2025-11-24 22:15:24.555 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:15:24 compute-0 nova_compute[189608]: 2025-11-24 22:15:24.613 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:15:24 compute-0 nova_compute[189608]: 2025-11-24 22:15:24.615 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:15:24 compute-0 nova_compute[189608]: 2025-11-24 22:15:24.694 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:15:24 compute-0 nova_compute[189608]: 2025-11-24 22:15:24.695 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:15:24 compute-0 nova_compute[189608]: 2025-11-24 22:15:24.786 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:15:24 compute-0 nova_compute[189608]: 2025-11-24 22:15:24.788 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:15:24 compute-0 nova_compute[189608]: 2025-11-24 22:15:24.870 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:15:24 compute-0 nova_compute[189608]: 2025-11-24 22:15:24.879 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:15:24 compute-0 nova_compute[189608]: 2025-11-24 22:15:24.965 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:15:24 compute-0 nova_compute[189608]: 2025-11-24 22:15:24.966 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:15:25 compute-0 nova_compute[189608]: 2025-11-24 22:15:25.031 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:15:25 compute-0 nova_compute[189608]: 2025-11-24 22:15:25.032 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:15:25 compute-0 nova_compute[189608]: 2025-11-24 22:15:25.091 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:15:25 compute-0 nova_compute[189608]: 2025-11-24 22:15:25.093 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:15:25 compute-0 nova_compute[189608]: 2025-11-24 22:15:25.159 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
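Each instance disk above is probed with qemu-img info, wrapped in oslo_concurrency.prlimit so the probe is capped at 1 GiB of address space (--as=1073741824) and 30 s of CPU time (--cpu=30). A minimal sketch reproducing a single probe with the standard library, assuming qemu-img and oslo.concurrency are installed and reusing one of the disk paths logged above:

```python
# Minimal sketch: re-run one of the qemu-img probes exactly as logged above.
import json
import subprocess

disk = "/var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk"
cmd = [
    "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
    "--as=1073741824",          # cap address space at 1 GiB
    "--cpu=30",                 # cap CPU time at 30 s
    "--",
    "env", "LC_ALL=C", "LANG=C",
    "qemu-img", "info", disk, "--force-share", "--output=json",
]
info = json.loads(subprocess.check_output(cmd))
print(info.get("format"), info.get("virtual-size"), info.get("actual-size"))
```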
Nov 24 22:15:25 compute-0 nova_compute[189608]: 2025-11-24 22:15:25.619 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:15:25 compute-0 nova_compute[189608]: 2025-11-24 22:15:25.621 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4585MB free_disk=72.14069366455078GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 22:15:25 compute-0 nova_compute[189608]: 2025-11-24 22:15:25.621 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:15:25 compute-0 nova_compute[189608]: 2025-11-24 22:15:25.622 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:15:25 compute-0 nova_compute[189608]: 2025-11-24 22:15:25.931 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance ea741b45-c6b4-41c0-a70f-c752b616faa2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:15:25 compute-0 nova_compute[189608]: 2025-11-24 22:15:25.931 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance 828f9a8f-602f-4ad5-a0b0-5a48a328d20e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:15:25 compute-0 nova_compute[189608]: 2025-11-24 22:15:25.932 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance 672e3ced-b18a-4ce7-aace-eb5c076ddb88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:15:25 compute-0 nova_compute[189608]: 2025-11-24 22:15:25.932 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance 7e7d375c-a42c-41c5-934f-c46941a40067 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:15:25 compute-0 nova_compute[189608]: 2025-11-24 22:15:25.933 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 22:15:25 compute-0 nova_compute[189608]: 2025-11-24 22:15:25.933 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
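The final view is consistent with the four allocations listed just above (each DISK_GB: 2, MEMORY_MB: 512, VCPU: 1); a quick arithmetic check, under the assumption that the 512 MB host memory reservation is counted into used_ram:

```python
# Quick consistency check against the figures logged above.
instances = 4
per_instance = {"MEMORY_MB": 512, "DISK_GB": 2, "VCPU": 1}
reserved_ram_mb = 512   # assumption: host-reserved memory is included in used_ram

used_ram = instances * per_instance["MEMORY_MB"] + reserved_ram_mb   # 2560 MB
used_disk = instances * per_instance["DISK_GB"]                      # 8 GB
used_vcpus = instances * per_instance["VCPU"]                        # 4
print(used_ram, used_disk, used_vcpus)
```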
Nov 24 22:15:25 compute-0 nova_compute[189608]: 2025-11-24 22:15:25.995 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Refreshing inventories for resource provider 7680d048-14f1-46f8-a34d-a7eb32eb11df _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 24 22:15:26 compute-0 nova_compute[189608]: 2025-11-24 22:15:26.079 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Updating ProviderTree inventory for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 24 22:15:26 compute-0 nova_compute[189608]: 2025-11-24 22:15:26.080 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Updating inventory in ProviderTree for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 24 22:15:26 compute-0 nova_compute[189608]: 2025-11-24 22:15:26.103 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Refreshing aggregate associations for resource provider 7680d048-14f1-46f8-a34d-a7eb32eb11df, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 24 22:15:26 compute-0 nova_compute[189608]: 2025-11-24 22:15:26.131 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Refreshing trait associations for resource provider 7680d048-14f1-46f8-a34d-a7eb32eb11df, traits: HW_CPU_X86_AVX2,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NODE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE4A,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_ACCELERATORS,HW_CPU_X86_SHA,HW_CPU_X86_AVX,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_1_2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 24 22:15:26 compute-0 nova_compute[189608]: 2025-11-24 22:15:26.257 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:15:26 compute-0 nova_compute[189608]: 2025-11-24 22:15:26.275 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
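Assuming the usual placement capacity rule, capacity = (total − reserved) × allocation_ratio, the inventory reported above yields the following schedulable totals:

```python
# Capacity check against the inventory data logged above (formula is the assumption here).
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, round(capacity, 2))   # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~70.2
```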
Nov 24 22:15:26 compute-0 nova_compute[189608]: 2025-11-24 22:15:26.278 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 22:15:26 compute-0 nova_compute[189608]: 2025-11-24 22:15:26.279 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:15:26 compute-0 nova_compute[189608]: 2025-11-24 22:15:26.280 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:15:27 compute-0 nova_compute[189608]: 2025-11-24 22:15:27.278 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:15:27 compute-0 nova_compute[189608]: 2025-11-24 22:15:27.278 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:15:27 compute-0 nova_compute[189608]: 2025-11-24 22:15:27.314 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Triggering sync for uuid ea741b45-c6b4-41c0-a70f-c752b616faa2 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 24 22:15:27 compute-0 nova_compute[189608]: 2025-11-24 22:15:27.315 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Triggering sync for uuid 828f9a8f-602f-4ad5-a0b0-5a48a328d20e _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 24 22:15:27 compute-0 nova_compute[189608]: 2025-11-24 22:15:27.315 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Triggering sync for uuid 672e3ced-b18a-4ce7-aace-eb5c076ddb88 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 24 22:15:27 compute-0 nova_compute[189608]: 2025-11-24 22:15:27.315 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Triggering sync for uuid 7e7d375c-a42c-41c5-934f-c46941a40067 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 24 22:15:27 compute-0 nova_compute[189608]: 2025-11-24 22:15:27.316 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "ea741b45-c6b4-41c0-a70f-c752b616faa2" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:15:27 compute-0 nova_compute[189608]: 2025-11-24 22:15:27.316 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "ea741b45-c6b4-41c0-a70f-c752b616faa2" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:15:27 compute-0 nova_compute[189608]: 2025-11-24 22:15:27.316 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "828f9a8f-602f-4ad5-a0b0-5a48a328d20e" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:15:27 compute-0 nova_compute[189608]: 2025-11-24 22:15:27.317 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "828f9a8f-602f-4ad5-a0b0-5a48a328d20e" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:15:27 compute-0 nova_compute[189608]: 2025-11-24 22:15:27.318 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "672e3ced-b18a-4ce7-aace-eb5c076ddb88" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:15:27 compute-0 nova_compute[189608]: 2025-11-24 22:15:27.318 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "672e3ced-b18a-4ce7-aace-eb5c076ddb88" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:15:27 compute-0 nova_compute[189608]: 2025-11-24 22:15:27.319 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "7e7d375c-a42c-41c5-934f-c46941a40067" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:15:27 compute-0 nova_compute[189608]: 2025-11-24 22:15:27.319 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "7e7d375c-a42c-41c5-934f-c46941a40067" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:15:27 compute-0 nova_compute[189608]: 2025-11-24 22:15:27.319 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:15:27 compute-0 nova_compute[189608]: 2025-11-24 22:15:27.319 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 22:15:27 compute-0 nova_compute[189608]: 2025-11-24 22:15:27.441 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "828f9a8f-602f-4ad5-a0b0-5a48a328d20e" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:15:27 compute-0 nova_compute[189608]: 2025-11-24 22:15:27.442 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "ea741b45-c6b4-41c0-a70f-c752b616faa2" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.126s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:15:27 compute-0 nova_compute[189608]: 2025-11-24 22:15:27.462 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "672e3ced-b18a-4ce7-aace-eb5c076ddb88" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.144s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:15:27 compute-0 nova_compute[189608]: 2025-11-24 22:15:27.464 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "7e7d375c-a42c-41c5-934f-c46941a40067" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.145s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:15:27 compute-0 nova_compute[189608]: 2025-11-24 22:15:27.836 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:15:27 compute-0 nova_compute[189608]: 2025-11-24 22:15:27.941 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:15:28 compute-0 nova_compute[189608]: 2025-11-24 22:15:28.441 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:15:28 compute-0 podman[244649]: 2025-11-24 22:15:28.575965022 +0000 UTC m=+0.115892347 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 22:15:29 compute-0 podman[203795]: time="2025-11-24T22:15:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:15:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:15:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 24 22:15:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:15:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4801 "" "Go-http-client/1.1"
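These requests are the podman service answering libpod REST calls over its unix socket (the podman_exporter config later in this capture points CONTAINER_HOST at unix:///run/podman/podman.sock). A minimal sketch of issuing the same containers/json query from Python, assuming that socket path and sufficient permissions:

```python
# Minimal sketch: query the libpod API over the podman unix socket (path is taken
# from the CONTAINER_HOST setting logged for podman_exporter; permissions assumed).
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """Plain HTTP over a unix-domain socket."""
    def __init__(self, path):
        super().__init__("localhost")
        self.unix_path = path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.unix_path)
        self.sock = sock

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
containers = json.loads(conn.getresponse().read())
print(len(containers), "containers")
```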
Nov 24 22:15:31 compute-0 openstack_network_exporter[205945]: ERROR   22:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:15:31 compute-0 openstack_network_exporter[205945]: ERROR   22:15:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:15:31 compute-0 openstack_network_exporter[205945]: ERROR   22:15:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:15:31 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:15:31 compute-0 openstack_network_exporter[205945]: ERROR   22:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:15:31 compute-0 openstack_network_exporter[205945]: ERROR   22:15:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:15:31 compute-0 openstack_network_exporter[205945]: 
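The exporter repeatedly fails to find *.ctl control sockets for ovn-northd and ovsdb-server; ovn-northd normally runs on the control plane rather than on a compute node, so that part of the error stream is expected noise here. A small diagnostic sketch that checks the host directories the exporter container mounts (host paths taken from its logged volume list), assuming those paths on this host:

```python
# Diagnostic sketch: look for OVS/OVN control sockets in the host directories that the
# openstack_network_exporter container mounts as /run/openvswitch and /run/ovn.
import glob

for directory in ("/var/run/openvswitch", "/var/lib/openvswitch/ovn"):
    sockets = glob.glob(f"{directory}/*.ctl")
    print(directory, sockets or "no control sockets found")
```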
Nov 24 22:15:31 compute-0 podman[244675]: 2025-11-24 22:15:31.503525768 +0000 UTC m=+0.062587948 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true)
Nov 24 22:15:31 compute-0 podman[244674]: 2025-11-24 22:15:31.541918332 +0000 UTC m=+0.105421110 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true)
Nov 24 22:15:32 compute-0 nova_compute[189608]: 2025-11-24 22:15:32.946 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:15:33 compute-0 nova_compute[189608]: 2025-11-24 22:15:33.444 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:15:33 compute-0 sshd-session[244673]: Invalid user a from 80.94.95.115 port 30372
Nov 24 22:15:33 compute-0 nova_compute[189608]: 2025-11-24 22:15:33.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:15:33 compute-0 nova_compute[189608]: 2025-11-24 22:15:33.793 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 24 22:15:33 compute-0 nova_compute[189608]: 2025-11-24 22:15:33.807 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 24 22:15:34 compute-0 sshd-session[244673]: Connection closed by invalid user a 80.94.95.115 port 30372 [preauth]
Nov 24 22:15:34 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 24 22:15:37 compute-0 nova_compute[189608]: 2025-11-24 22:15:37.948 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:15:38 compute-0 nova_compute[189608]: 2025-11-24 22:15:38.449 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:15:39 compute-0 podman[244719]: 2025-11-24 22:15:39.587614442 +0000 UTC m=+0.122742300 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 24 22:15:42 compute-0 nova_compute[189608]: 2025-11-24 22:15:42.952 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:15:43 compute-0 nova_compute[189608]: 2025-11-24 22:15:43.451 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:15:45 compute-0 podman[244741]: 2025-11-24 22:15:45.547328058 +0000 UTC m=+0.091583240 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 24 22:15:47 compute-0 nova_compute[189608]: 2025-11-24 22:15:47.955 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:15:48 compute-0 nova_compute[189608]: 2025-11-24 22:15:48.454 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:15:49 compute-0 podman[244760]: 2025-11-24 22:15:49.590849026 +0000 UTC m=+0.135641870 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.4, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:15:52 compute-0 nova_compute[189608]: 2025-11-24 22:15:52.959 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:15:53 compute-0 nova_compute[189608]: 2025-11-24 22:15:53.457 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:15:54 compute-0 podman[244781]: 2025-11-24 22:15:54.563635267 +0000 UTC m=+0.107197167 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, architecture=x86_64, container_name=kepler, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., vcs-type=git, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, managed_by=edpm_ansible, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9)
Nov 24 22:15:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:15:54.573 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:15:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:15:54.574 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:15:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:15:54.576 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:15:54 compute-0 podman[244783]: 2025-11-24 22:15:54.576558269 +0000 UTC m=+0.108309901 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Nov 24 22:15:54 compute-0 podman[244782]: 2025-11-24 22:15:54.610648519 +0000 UTC m=+0.144212467 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64, maintainer=Red Hat, Inc., version=9.6, distribution-scope=public, name=ubi9-minimal, io.openshift.expose-services=, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc.)
Nov 24 22:15:57 compute-0 nova_compute[189608]: 2025-11-24 22:15:57.964 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:15:58 compute-0 nova_compute[189608]: 2025-11-24 22:15:58.460 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:15:59 compute-0 podman[244836]: 2025-11-24 22:15:59.522993369 +0000 UTC m=+0.075555002 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 24 22:15:59 compute-0 podman[203795]: time="2025-11-24T22:15:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:15:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:15:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 24 22:15:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:15:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4804 "" "Go-http-client/1.1"
Nov 24 22:16:01 compute-0 openstack_network_exporter[205945]: ERROR   22:16:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:16:01 compute-0 openstack_network_exporter[205945]: ERROR   22:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:16:01 compute-0 openstack_network_exporter[205945]: ERROR   22:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:16:01 compute-0 openstack_network_exporter[205945]: ERROR   22:16:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:16:01 compute-0 openstack_network_exporter[205945]: ERROR   22:16:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:16:02 compute-0 podman[244861]: 2025-11-24 22:16:02.563068306 +0000 UTC m=+0.087181583 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 24 22:16:02 compute-0 podman[244860]: 2025-11-24 22:16:02.620106581 +0000 UTC m=+0.164546911 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 22:16:02 compute-0 nova_compute[189608]: 2025-11-24 22:16:02.967 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:16:03 compute-0 nova_compute[189608]: 2025-11-24 22:16:03.463 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:16:07 compute-0 nova_compute[189608]: 2025-11-24 22:16:07.970 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:16:08 compute-0 nova_compute[189608]: 2025-11-24 22:16:08.467 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:16:10 compute-0 podman[244904]: 2025-11-24 22:16:10.534262447 +0000 UTC m=+0.086816412 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 24 22:16:12 compute-0 nova_compute[189608]: 2025-11-24 22:16:12.974 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:16:13 compute-0 nova_compute[189608]: 2025-11-24 22:16:13.470 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:16:15 compute-0 nova_compute[189608]: 2025-11-24 22:16:15.807 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:16:15 compute-0 nova_compute[189608]: 2025-11-24 22:16:15.808 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 22:16:16 compute-0 nova_compute[189608]: 2025-11-24 22:16:16.330 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "refresh_cache-828f9a8f-602f-4ad5-a0b0-5a48a328d20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:16:16 compute-0 nova_compute[189608]: 2025-11-24 22:16:16.331 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquired lock "refresh_cache-828f9a8f-602f-4ad5-a0b0-5a48a328d20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:16:16 compute-0 nova_compute[189608]: 2025-11-24 22:16:16.331 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 24 22:16:16 compute-0 podman[244925]: 2025-11-24 22:16:16.537918208 +0000 UTC m=+0.086883950 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118)
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.624 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.625 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.625 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.625 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f4c57fbbfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.626 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.626 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.626 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.627 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.627 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.627 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.627 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.628 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.628 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.628 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.629 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.629 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.630 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.630 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.631 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.631 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.631 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.632 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.632 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.633 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.633 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.633 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.634 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.634 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.635 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.635 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '672e3ced-b18a-4ce7-aace-eb5c076ddb88', 'name': 'vn-7lqlsa5-oenbgqpbbpdb-jsa5qovsyfxv-vnf-d3kwbkhn7jvw', 'flavor': {'id': 'cb3b4b8d-66d1-4524-ab48-d562ca8e5b4b', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a63b9561-12dc-4c11-858f-aa6fafbed036'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '309342b7e3e849b2a5dd56651d8fa068', 'user_id': '572aaac113f54af8a894707849aed6bf', 'hostId': '138597a5fd3bc726da772e57d320c412a04081cc4fa53fa9b77f5b6a', 'status': 'active', 'metadata': {'metering.server_group': 'b438824c-ce52-4539-9db6-355e0ca018db'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.638 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7e7d375c-a42c-41c5-934f-c46941a40067', 'name': 'vn-7lqlsa5-qwni3m7kkkje-hcw6uchrxoeb-vnf-7cvls2zpo5gs', 'flavor': {'id': 'cb3b4b8d-66d1-4524-ab48-d562ca8e5b4b', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a63b9561-12dc-4c11-858f-aa6fafbed036'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '309342b7e3e849b2a5dd56651d8fa068', 'user_id': '572aaac113f54af8a894707849aed6bf', 'hostId': '138597a5fd3bc726da772e57d320c412a04081cc4fa53fa9b77f5b6a', 'status': 'active', 'metadata': {'metering.server_group': 'b438824c-ce52-4539-9db6-355e0ca018db'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.642 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'ea741b45-c6b4-41c0-a70f-c752b616faa2', 'name': 'test_0', 'flavor': {'id': 'cb3b4b8d-66d1-4524-ab48-d562ca8e5b4b', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a63b9561-12dc-4c11-858f-aa6fafbed036'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '309342b7e3e849b2a5dd56651d8fa068', 'user_id': '572aaac113f54af8a894707849aed6bf', 'hostId': '138597a5fd3bc726da772e57d320c412a04081cc4fa53fa9b77f5b6a', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.645 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '828f9a8f-602f-4ad5-a0b0-5a48a328d20e', 'name': 'vn-7lqlsa5-5i26uioiiugb-edy6vcflj352-vnf-mrrqmlwdy5wk', 'flavor': {'id': 'cb3b4b8d-66d1-4524-ab48-d562ca8e5b4b', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a63b9561-12dc-4c11-858f-aa6fafbed036'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '309342b7e3e849b2a5dd56651d8fa068', 'user_id': '572aaac113f54af8a894707849aed6bf', 'hostId': '138597a5fd3bc726da772e57d320c412a04081cc4fa53fa9b77f5b6a', 'status': 'active', 'metadata': {'metering.server_group': 'b438824c-ce52-4539-9db6-355e0ca018db'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.646 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.646 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.646 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.646 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.647 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-24T22:16:17.646878) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.652 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.658 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.663 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.668 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:17 compute-0 nova_compute[189608]: 2025-11-24 22:16:17.669 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Updating instance_info_cache with network_info: [{"id": "3223b8cb-74bd-4db9-8dd2-441f7c81c71c", "address": "fa:16:3e:3e:40:c3", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.166", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3223b8cb-74", "ovs_interfaceid": "3223b8cb-74bd-4db9-8dd2-441f7c81c71c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.670 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.670 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f4c580340b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.671 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.671 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.671 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.671 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.671 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.671 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.672 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.672 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-24T22:16:17.671194) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.672 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.672 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.672 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f4c57fba4e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.672 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.673 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.673 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.673 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.673 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-24T22:16:17.673158) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:16:17 compute-0 nova_compute[189608]: 2025-11-24 22:16:17.685 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Releasing lock "refresh_cache-828f9a8f-602f-4ad5-a0b0-5a48a328d20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:16:17 compute-0 nova_compute[189608]: 2025-11-24 22:16:17.685 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
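These two nova_compute lines come from the periodic _heal_instance_info_cache task, which serializes network-info refreshes per instance with an oslo.concurrency lock named "refresh_cache-<uuid>"; the "Releasing lock" message appears once the cache update is done. A rough sketch of that locking pattern, assuming oslo.concurrency is installed; refresh_network_info is a hypothetical stand-in for the actual cache rebuild, not nova's code.

# Sketch of per-instance serialization around the info-cache refresh.
from oslo_concurrency import lockutils

def refresh_network_info(instance_uuid):
    # Placeholder: in nova this rebuilds the instance's network info_cache
    # from Neutron and persists it; here it just returns a dummy structure.
    return {"instance": instance_uuid, "ports": []}

def heal_instance_info_cache(instance_uuid):
    # The lock name matches the "refresh_cache-<uuid>" strings in the log;
    # acquiring and releasing it produces the lockutils DEBUG messages.
    with lockutils.lock("refresh_cache-%s" % instance_uuid):
        return refresh_network_info(instance_uuid)

heal_instance_info_cache("828f9a8f-602f-4ad5-a0b0-5a48a328d20e")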
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.706 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.706 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.707 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.745 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.745 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.746 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.782 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.783 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.783 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.815 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.816 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.817 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.818 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
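disk.device.capacity is emitted once per block device of each instance, which is why every UUID appears three times above: two 1 GiB devices (1073741824 bytes) plus a much smaller third device. The figures correspond to what libvirt reports as device capacity. A sketch of turning such per-device numbers into samples follows; the device list is fabricated to mirror the logged volumes, and the "<instance>-<device>" resource_id form is a plausible convention rather than a guaranteed one.

# Sketch: one disk.device.capacity sample per (instance, device) pair.
INSTANCE_DISKS = {
    "672e3ced-b18a-4ce7-aace-eb5c076ddb88": {
        "vda": 1073741824,   # 1 GiB disk
        "vdb": 1073741824,   # 1 GiB disk
        "hda": 583680,       # small third device, as logged
    },
}

def capacity_samples(instance_disks):
    for instance, disks in instance_disks.items():
        for device, capacity_bytes in disks.items():
            yield {
                "meter": "disk.device.capacity",
                "resource_id": "%s-%s" % (instance, device),
                "unit": "B",
                "volume": capacity_bytes,
            }

for sample in capacity_samples(INSTANCE_DISKS):
    print(sample["resource_id"], sample["volume"])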
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.818 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f4c57fbb080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.818 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.819 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.819 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.819 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.819 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-24T22:16:17.819402) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.936 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.938 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:17.940 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:17 compute-0 nova_compute[189608]: 2025-11-24 22:16:17.977 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
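Interleaved with the ceilometer output, nova_compute's ovsdbapp connection logs a [POLLIN] wakeup whenever the OVSDB socket (fd 26 here) has data to read; the IDL thread sits in a readiness loop and resumes processing on each such event. A stdlib-only illustration of that kind of loop follows; the real code uses ovs.poller.Poller rather than the selectors module, so this is only an analogy.

# Minimal illustration of a POLLIN-driven wakeup loop using the standard library.
import selectors
import socket

def watch(sock, handle_update):
    sel = selectors.DefaultSelector()
    sel.register(sock, selectors.EVENT_READ)
    while True:
        for key, _events in sel.select():   # blocks until the fd is readable
            data = key.fileobj.recv(4096)
            if not data:
                return                      # peer closed the connection
            handle_update(data)             # e.g. feed the update to a parser

# Example wiring with a socket pair standing in for the OVSDB connection.
a, b = socket.socketpair()
b.sendall(b'{"method": "update"}')
b.close()
watch(a, lambda chunk: print("got", len(chunk), "bytes"))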
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.070 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.071 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.072 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.182 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.183 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.183 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.289 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.290 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.291 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.291 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.292 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f4c57fbb1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.292 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.292 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.292 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.292 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.292 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.read.latency volume: 822487867 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.293 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-24T22:16:18.292496) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.294 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.read.latency volume: 92574229 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.294 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.read.latency volume: 84915884 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.295 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.read.latency volume: 742675991 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.295 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.read.latency volume: 148600369 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.296 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.read.latency volume: 107984847 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.296 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.latency volume: 667223647 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.297 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.latency volume: 118019506 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.297 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.latency volume: 84934831 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.297 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.latency volume: 1140139100 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.298 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.latency volume: 133972753 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.298 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.latency volume: 92855613 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.299 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f4c57fb9730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.299 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.299 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.300 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.300 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.301 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-24T22:16:18.300386) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.342 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/cpu volume: 37360000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.378 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/cpu volume: 36170000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.417 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/cpu volume: 42280000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.452 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/cpu volume: 379670000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.453 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
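The cpu meter is cumulative: the volume is total guest CPU time in nanoseconds since the instance started (e.g. 37360000000 ns, roughly 37.4 s, for 672e3ced-...). Utilization only falls out when two successive polls are compared against wall-clock time and the vCPU count. A sketch of that calculation follows; the second sample, the 10-minute interval, and the single vCPU are assumptions chosen for illustration, not values from the log.

# Sketch: turning two cumulative cpu samples into a utilization percentage.
def cpu_util_percent(prev_cpu_ns, curr_cpu_ns, interval_s, vcpus):
    used_s = (curr_cpu_ns - prev_cpu_ns) / 1e9
    return 100.0 * used_s / (interval_s * vcpus)

prev = 37360000000        # cumulative guest CPU time at the previous poll (ns)
curr = 37960000000        # assumed cumulative CPU time at the next poll (ns)
print("%.1f%% of one vCPU" % cpu_util_percent(prev, curr, interval_s=600, vcpus=1))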
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.453 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f4c57fbb200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.453 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.453 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.454 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.454 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.454 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.454 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.455 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.455 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.455 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.455 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.456 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.456 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.456 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.457 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.457 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.457 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.458 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
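The disk.device.read.latency values earlier in this cycle are cumulative counters in nanoseconds (total time spent on reads per device), not instantaneous latencies, so a per-request average has to be derived by pairing them with the disk.device.read.requests counters just polled. A small worked example using the numbers logged for instance 672e3ced-b18a-4ce7-aace-eb5c076ddb88; the pairing of the three devices is assumed to follow the order in which they were logged.

# Sketch: average read latency (ms per request) from the cumulative counters above.
read_latency_ns = [822487867, 92574229, 84915884]   # cumulative ns per device
read_requests = [840, 173, 124]                     # cumulative requests per device

for device_idx, (ns, reqs) in enumerate(zip(read_latency_ns, read_requests)):
    avg_ms = (ns / reqs) / 1e6 if reqs else 0.0
    print("device %d: %.2f ms per read on average since boot" % (device_idx, avg_ms))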
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.458 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f4c57fbb260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.458 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.459 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.459 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.459 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.459 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.459 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.460 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.460 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.460 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.461 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-24T22:16:18.454104) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.461 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.461 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-24T22:16:18.459287) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.461 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.461 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.462 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.462 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.462 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.462 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.463 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.463 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f4c57fbb2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.463 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.463 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.464 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.464 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.464 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.write.bytes volume: 41844736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.464 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-24T22:16:18.464099) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.464 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.465 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.465 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.465 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.466 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.466 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.466 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.467 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.467 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.467 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.467 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.468 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.468 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f4c57fbb320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.468 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.468 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.469 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.469 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.469 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.write.latency volume: 2777922560 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.469 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.write.latency volume: 9784234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.469 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.470 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-24T22:16:18.469143) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.470 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.write.latency volume: 2394094265 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.470 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.write.latency volume: 18006599 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.471 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.471 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.latency volume: 2026985911 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.471 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.latency volume: 25037348 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.471 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.472 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.latency volume: 1391550182 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.472 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.latency volume: 12781157 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.472 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 nova_compute[189608]: 2025-11-24 22:16:18.473 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.474 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.475 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f4c57fbb380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.475 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.475 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.475 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.475 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.475 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.write.requests volume: 236 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.475 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-24T22:16:18.475508) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.476 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.476 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.476 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.477 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.477 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.477 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.478 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.478 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.478 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.requests volume: 243 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.478 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.479 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.479 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.480 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f4c58034380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.480 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.480 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.480 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.480 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.480 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.481 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-24T22:16:18.480590) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.481 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.481 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.481 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.482 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
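power.state reports the instance's power state as a small integer; all four instances report 1 here, which corresponds to a running guest in nova's power-state numbering. A sketch of that mapping follows; the numeric values are hard-coded as an assumption mirroring nova.compute.power_state rather than imported from nova.

# Sketch: interpreting the power.state volumes logged above.
POWER_STATES = {
    0: "NOSTATE",
    1: "RUNNING",
    3: "PAUSED",
    4: "SHUTDOWN",
    6: "CRASHED",
    7: "SUSPENDED",
}

samples = {
    "672e3ced-b18a-4ce7-aace-eb5c076ddb88": 1,
    "7e7d375c-a42c-41c5-934f-c46941a40067": 1,
    "ea741b45-c6b4-41c0-a70f-c752b616faa2": 1,
    "828f9a8f-602f-4ad5-a0b0-5a48a328d20e": 1,
}

for uuid, state in samples.items():
    print(uuid, "->", POWER_STATES.get(state, "UNKNOWN"))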
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.482 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f4c57fbb740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.482 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.482 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.482 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.482 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.482 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.483 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.incoming.bytes.delta volume: 42 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.483 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.483 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-24T22:16:18.482810) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.484 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.484 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
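network.incoming.bytes.delta differs from the cumulative meters: only the change since the previous poll is published, which is why three of the four instances report 0 while one reports 42 bytes. That implies the agent keeps the last cumulative counter per resource between cycles. A sketch of that bookkeeping follows, with a hypothetical in-memory cache and invented counter values; the real agent's caching strategy is not shown in the log.

# Sketch: computing a per-poll delta from cumulative interface counters.
previous = {}   # acts as the cache of last-seen cumulative values between polls

def incoming_bytes_delta(resource_id, cumulative_bytes):
    last = previous.get(resource_id)
    previous[resource_id] = cumulative_bytes
    if last is None:
        return 0                       # first poll: nothing to diff against
    return max(cumulative_bytes - last, 0)

print(incoming_bytes_delta("7e7d375c-a42c-41c5-934f-c46941a40067", 1000))  # 0 on first poll
print(incoming_bytes_delta("7e7d375c-a42c-41c5-934f-c46941a40067", 1042))  # 42 on the next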
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.484 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f4c57fbb3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.484 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.484 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.485 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.485 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.486 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.486 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f4c57fbb9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.486 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.486 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-24T22:16:18.485190) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.487 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f4c57fbb440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.487 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.487 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.487 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.487 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.488 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.488 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f4c57fbbc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.488 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.488 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-24T22:16:18.487506) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.488 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.489 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.489 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.489 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.489 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.490 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-24T22:16:18.489187) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.490 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.incoming.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.490 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.incoming.packets volume: 54 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.491 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.491 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f4c57fbbcb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.491 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.491 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.491 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.492 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.492 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.492 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.492 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.493 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-24T22:16:18.491997) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.493 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.493 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.494 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f4c57fbbd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.494 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.494 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.494 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.494 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.494 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.494 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.495 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.495 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.496 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.496 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f4c57fbbda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.496 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.496 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.497 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.497 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-24T22:16:18.494455) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.497 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.497 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.497 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.outgoing.bytes volume: 2328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.497 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.498 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.outgoing.bytes volume: 7700 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.498 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-24T22:16:18.497096) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.498 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.499 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f4c57fbbe30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.499 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.499 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.499 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.499 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.499 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.499 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.outgoing.bytes.delta volume: 662 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.500 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.500 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.501 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.501 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f4c57fbb680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.501 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.501 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.502 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.502 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-24T22:16:18.499538) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.502 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.502 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/memory.usage volume: 49.04296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.502 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/memory.usage volume: 49.04296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.502 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-24T22:16:18.502161) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.503 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/memory.usage volume: 48.9453125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.503 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/memory.usage volume: 48.97265625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.503 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.503 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f4c57fbbec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.504 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.504 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f4c57fbb6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.504 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.504 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.504 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.504 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.504 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.505 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.incoming.bytes volume: 1528 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.505 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.incoming.bytes volume: 2136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.505 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.incoming.bytes volume: 8364 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.506 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.506 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f4c59b99130>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.506 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.506 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.506 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.506 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.507 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.507 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-24T22:16:18.504620) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.507 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.507 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.508 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.allocation volume: 22290432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.508 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.508 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.508 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.509 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.509 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.509 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.510 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.510 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.511 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.511 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f4c57fbbf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.511 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.511 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.511 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.511 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-24T22:16:18.506971) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.512 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.512 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.512 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.512 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.513 14 DEBUG ceilometer.compute.pollsters [-] 828f9a8f-602f-4ad5-a0b0-5a48a328d20e/network.outgoing.packets volume: 68 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.513 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.514 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.514 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.514 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-24T22:16:18.512098) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.514 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.514 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.514 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.515 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.515 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.515 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.515 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.515 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.515 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.515 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.515 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.515 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.515 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.515 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.515 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.515 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.515 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.515 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.515 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:16:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:16:18.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
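The run of "Finished processing pollster [...]" lines closes one polling cycle: for each meter the manager runs discovery, skips the pollster when no new resources were found this cycle, emits one sample per instance, and records a heartbeat. A minimal sketch of that per-cycle flow, assuming hypothetical names only (PollingCycle, get_volume, publish are illustrative, not Ceilometer's API):

from datetime import datetime, timezone


class PollingCycle:
    def __init__(self, pollsters, discover, publish):
        self.pollsters = pollsters   # mapping: meter name -> pollster object
        self.discover = discover     # callable returning local instance dicts
        self.publish = publish       # callable taking (meter, resource_id, volume)
        self.heartbeats = {}

    def run(self):
        resources = self.discover()
        for name, pollster in self.pollsters.items():
            if not resources:
                # mirrors "Skip pollster ..., no new resources found this cycle"
                continue
            for resource in resources:
                volume = pollster.get_volume(resource)
                self.publish(name, resource["id"], volume)
            # mirrors "Updated heartbeat for <meter> (<timestamp>)"
            self.heartbeats[name] = datetime.now(timezone.utc).isoformat()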
Nov 24 22:16:20 compute-0 podman[244947]: 2025-11-24 22:16:20.625274676 +0000 UTC m=+0.167989238 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 24 22:16:22 compute-0 nova_compute[189608]: 2025-11-24 22:16:22.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:16:22 compute-0 nova_compute[189608]: 2025-11-24 22:16:22.981 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:16:23 compute-0 nova_compute[189608]: 2025-11-24 22:16:23.476 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:16:23 compute-0 nova_compute[189608]: 2025-11-24 22:16:23.789 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:16:23 compute-0 nova_compute[189608]: 2025-11-24 22:16:23.792 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:16:23 compute-0 nova_compute[189608]: 2025-11-24 22:16:23.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
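The nova-compute entries above are emitted by oslo.service's periodic task runner as it iterates over methods decorated as periodic tasks on the compute manager. A minimal sketch of how such tasks are declared, assuming oslo.service and oslo.config are available; the manager class and task bodies are placeholders, only the decorator and base-class usage mirrors what nova-compute does:

from oslo_config import cfg
from oslo_service import periodic_task


class ExampleManager(periodic_task.PeriodicTasks):
    def __init__(self, conf):
        super().__init__(conf)

    @periodic_task.periodic_task(spacing=60)
    def _poll_rebooting_instances(self, context):
        # placeholder body; the real task checks for instances stuck rebooting
        pass

    @periodic_task.periodic_task(spacing=60)
    def _poll_volume_usage(self, context):
        pass


# Driving the tasks (normally done by the service loop):
# manager = ExampleManager(cfg.CONF)
# manager.run_periodic_tasks(context=None)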
Nov 24 22:16:24 compute-0 nova_compute[189608]: 2025-11-24 22:16:24.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:16:24 compute-0 nova_compute[189608]: 2025-11-24 22:16:24.822 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:16:24 compute-0 nova_compute[189608]: 2025-11-24 22:16:24.823 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:16:24 compute-0 nova_compute[189608]: 2025-11-24 22:16:24.824 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
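The "Acquiring lock" / "acquired" / "released" triplet around "compute_resources" is oslo.concurrency's lockutils guarding the resource tracker. A short sketch of the same pattern; the lock name matches the log, but clean_cache is a hypothetical stand-in for the guarded method:

from oslo_concurrency import lockutils


@lockutils.synchronized('compute_resources')
def clean_cache():
    # body runs with the "compute_resources" lock held; lockutils' inner
    # wrapper emits the acquire/release debug lines seen above
    pass


# The same lock can also be taken as a context manager:
# with lockutils.lock('compute_resources'):
#     ...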
Nov 24 22:16:24 compute-0 nova_compute[189608]: 2025-11-24 22:16:24.825 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 22:16:24 compute-0 nova_compute[189608]: 2025-11-24 22:16:24.983 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:16:25 compute-0 nova_compute[189608]: 2025-11-24 22:16:25.049 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
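The resource audit shells out to qemu-img info wrapped by oslo_concurrency.prlimit, capping the child at 1 GiB of address space (--as=1073741824) and 30 seconds of CPU (--cpu=30). Below is a sketch of an equivalent call through oslo.concurrency's processutils, reusing the disk path and limits from the log line above; it is an illustration, not nova's code:

from oslo_concurrency import processutils

limits = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)
out, err = processutils.execute(
    'qemu-img', 'info',
    '/var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk',
    '--force-share', '--output=json',
    prlimit=limits,
    env_variables={'LC_ALL': 'C', 'LANG': 'C'},
)
# out holds the JSON description of the disk image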
Nov 24 22:16:25 compute-0 nova_compute[189608]: 2025-11-24 22:16:25.051 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:16:25 compute-0 nova_compute[189608]: 2025-11-24 22:16:25.137 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:16:25 compute-0 nova_compute[189608]: 2025-11-24 22:16:25.138 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:16:25 compute-0 nova_compute[189608]: 2025-11-24 22:16:25.230 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.eph0 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:16:25 compute-0 nova_compute[189608]: 2025-11-24 22:16:25.232 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:16:25 compute-0 nova_compute[189608]: 2025-11-24 22:16:25.356 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.eph0 --force-share --output=json" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:16:25 compute-0 nova_compute[189608]: 2025-11-24 22:16:25.373 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:16:25 compute-0 nova_compute[189608]: 2025-11-24 22:16:25.457 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:16:25 compute-0 nova_compute[189608]: 2025-11-24 22:16:25.458 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:16:25 compute-0 nova_compute[189608]: 2025-11-24 22:16:25.542 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:16:25 compute-0 nova_compute[189608]: 2025-11-24 22:16:25.543 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:16:25 compute-0 podman[244980]: 2025-11-24 22:16:25.54817349 +0000 UTC m=+0.093191946 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vcs-type=git, version=9.4, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, release=1214.1726694543, com.redhat.component=ubi9-container)
Nov 24 22:16:25 compute-0 podman[244982]: 2025-11-24 22:16:25.563806758 +0000 UTC m=+0.102184467 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 22:16:25 compute-0 podman[244981]: 2025-11-24 22:16:25.59791472 +0000 UTC m=+0.134359409 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, vcs-type=git, architecture=x86_64, config_id=edpm, version=9.6, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter)
Nov 24 22:16:25 compute-0 nova_compute[189608]: 2025-11-24 22:16:25.609 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:16:25 compute-0 nova_compute[189608]: 2025-11-24 22:16:25.611 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:16:25 compute-0 nova_compute[189608]: 2025-11-24 22:16:25.683 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.eph0 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:16:25 compute-0 nova_compute[189608]: 2025-11-24 22:16:25.690 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:16:25 compute-0 nova_compute[189608]: 2025-11-24 22:16:25.751 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:16:25 compute-0 nova_compute[189608]: 2025-11-24 22:16:25.752 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:16:25 compute-0 nova_compute[189608]: 2025-11-24 22:16:25.814 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:16:25 compute-0 nova_compute[189608]: 2025-11-24 22:16:25.815 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:16:25 compute-0 nova_compute[189608]: 2025-11-24 22:16:25.895 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:16:25 compute-0 nova_compute[189608]: 2025-11-24 22:16:25.896 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:16:25 compute-0 nova_compute[189608]: 2025-11-24 22:16:25.989 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:16:26 compute-0 nova_compute[189608]: 2025-11-24 22:16:26.000 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:16:26 compute-0 nova_compute[189608]: 2025-11-24 22:16:26.065 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:16:26 compute-0 nova_compute[189608]: 2025-11-24 22:16:26.066 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:16:26 compute-0 nova_compute[189608]: 2025-11-24 22:16:26.165 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:16:26 compute-0 nova_compute[189608]: 2025-11-24 22:16:26.167 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:16:26 compute-0 nova_compute[189608]: 2025-11-24 22:16:26.260 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:16:26 compute-0 nova_compute[189608]: 2025-11-24 22:16:26.261 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:16:26 compute-0 nova_compute[189608]: 2025-11-24 22:16:26.325 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
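The "Running cmd (subprocess)" / "CMD ... returned: 0" pairs above are nova-compute's periodic disk-usage probe: each instance disk (and its disk.eph0 ephemeral image) is inspected with qemu-img info, wrapped in the oslo_concurrency.prlimit helper so the probe is capped at a 1 GiB address space and 30 s of CPU time. Below is a minimal Python sketch that replays one such probe exactly as logged at 22:16:25.458; the disk path and the prlimit arguments are copied from that line, and it assumes oslo.concurrency and qemu-img are installed and the caller can read the nova instances directory.

    # Sketch only: replays the prlimit-wrapped qemu-img probe seen in the log.
    import json
    import subprocess

    # Instance disk path taken from the log line at 22:16:25.458.
    disk = "/var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk"

    cmd = [
        "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
        "--as=1073741824",   # 1 GiB address-space cap, as in the log
        "--cpu=30",          # 30 s CPU-time cap, as in the log
        "--", "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info", disk, "--force-share", "--output=json",
    ]

    # qemu-img prints a JSON document describing the image on stdout.
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    info = json.loads(out.stdout)
    print(info["format"], info["virtual-size"])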
Nov 24 22:16:26 compute-0 nova_compute[189608]: 2025-11-24 22:16:26.768 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:16:26 compute-0 nova_compute[189608]: 2025-11-24 22:16:26.770 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4576MB free_disk=72.13871383666992GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 22:16:26 compute-0 nova_compute[189608]: 2025-11-24 22:16:26.770 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:16:26 compute-0 nova_compute[189608]: 2025-11-24 22:16:26.770 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:16:26 compute-0 nova_compute[189608]: 2025-11-24 22:16:26.855 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance ea741b45-c6b4-41c0-a70f-c752b616faa2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:16:26 compute-0 nova_compute[189608]: 2025-11-24 22:16:26.855 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance 828f9a8f-602f-4ad5-a0b0-5a48a328d20e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:16:26 compute-0 nova_compute[189608]: 2025-11-24 22:16:26.856 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance 672e3ced-b18a-4ce7-aace-eb5c076ddb88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:16:26 compute-0 nova_compute[189608]: 2025-11-24 22:16:26.856 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance 7e7d375c-a42c-41c5-934f-c46941a40067 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:16:26 compute-0 nova_compute[189608]: 2025-11-24 22:16:26.857 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 22:16:26 compute-0 nova_compute[189608]: 2025-11-24 22:16:26.857 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 22:16:26 compute-0 nova_compute[189608]: 2025-11-24 22:16:26.949 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:16:26 compute-0 nova_compute[189608]: 2025-11-24 22:16:26.965 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:16:26 compute-0 nova_compute[189608]: 2025-11-24 22:16:26.967 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 22:16:26 compute-0 nova_compute[189608]: 2025-11-24 22:16:26.967 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.197s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
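The inventory dictionary logged a few lines above (VCPU/MEMORY_MB/DISK_GB with their allocation ratios) is what the resource tracker compares against Placement for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df during this update cycle. A minimal Python sketch for reading the same inventory back from the Placement API is below; it is an illustration only, assuming python3-keystoneauth1 is available and admin credentials are exported in the usual OS_* environment variables, with the provider UUID copied from the log.

    # Sketch only: fetch the inventory of the resource provider named in the log.
    import os
    from keystoneauth1 import loading, session

    provider = "7680d048-14f1-46f8-a34d-a7eb32eb11df"  # UUID from the log above

    loader = loading.get_plugin_loader("password")
    auth = loader.load_from_options(
        auth_url=os.environ["OS_AUTH_URL"],
        username=os.environ["OS_USERNAME"],
        password=os.environ["OS_PASSWORD"],
        project_name=os.environ["OS_PROJECT_NAME"],
        user_domain_name=os.environ.get("OS_USER_DOMAIN_NAME", "Default"),
        project_domain_name=os.environ.get("OS_PROJECT_DOMAIN_NAME", "Default"),
    )
    sess = session.Session(auth=auth)

    # GET /resource_providers/{uuid}/inventories on the placement endpoint
    # from the service catalog; the response mirrors the dict in the log.
    resp = sess.get(
        "/resource_providers/%s/inventories" % provider,
        endpoint_filter={"service_type": "placement"},
    )
    print(resp.json()["inventories"])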
Nov 24 22:16:27 compute-0 nova_compute[189608]: 2025-11-24 22:16:27.967 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:16:27 compute-0 nova_compute[189608]: 2025-11-24 22:16:27.984 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:16:28 compute-0 nova_compute[189608]: 2025-11-24 22:16:28.480 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:16:28 compute-0 nova_compute[189608]: 2025-11-24 22:16:28.792 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:16:28 compute-0 nova_compute[189608]: 2025-11-24 22:16:28.794 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 22:16:29 compute-0 podman[203795]: time="2025-11-24T22:16:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:16:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:16:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 24 22:16:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:16:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4805 "" "Go-http-client/1.1"
Nov 24 22:16:29 compute-0 nova_compute[189608]: 2025-11-24 22:16:29.795 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:16:30 compute-0 podman[245070]: 2025-11-24 22:16:30.578328527 +0000 UTC m=+0.122374326 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 24 22:16:31 compute-0 openstack_network_exporter[205945]: ERROR   22:16:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:16:31 compute-0 openstack_network_exporter[205945]: ERROR   22:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:16:31 compute-0 openstack_network_exporter[205945]: ERROR   22:16:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:16:31 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:16:31 compute-0 openstack_network_exporter[205945]: ERROR   22:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:16:31 compute-0 openstack_network_exporter[205945]: ERROR   22:16:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:16:31 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:16:32 compute-0 nova_compute[189608]: 2025-11-24 22:16:32.988 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:16:33 compute-0 nova_compute[189608]: 2025-11-24 22:16:33.485 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:16:33 compute-0 podman[245095]: 2025-11-24 22:16:33.567209575 +0000 UTC m=+0.108748490 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 24 22:16:33 compute-0 podman[245094]: 2025-11-24 22:16:33.611150665 +0000 UTC m=+0.150564074 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 22:16:37 compute-0 nova_compute[189608]: 2025-11-24 22:16:37.993 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:16:38 compute-0 nova_compute[189608]: 2025-11-24 22:16:38.487 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:16:41 compute-0 podman[245136]: 2025-11-24 22:16:41.565610145 +0000 UTC m=+0.105073797 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 24 22:16:42 compute-0 nova_compute[189608]: 2025-11-24 22:16:42.997 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:16:43 compute-0 nova_compute[189608]: 2025-11-24 22:16:43.490 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:16:47 compute-0 podman[245160]: 2025-11-24 22:16:47.571317673 +0000 UTC m=+0.114288444 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 24 22:16:48 compute-0 nova_compute[189608]: 2025-11-24 22:16:48.001 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:16:48 compute-0 nova_compute[189608]: 2025-11-24 22:16:48.492 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:16:51 compute-0 podman[245181]: 2025-11-24 22:16:51.59090213 +0000 UTC m=+0.128352782 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Nov 24 22:16:53 compute-0 nova_compute[189608]: 2025-11-24 22:16:53.006 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:16:53 compute-0 nova_compute[189608]: 2025-11-24 22:16:53.495 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:16:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:16:54.575 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:16:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:16:54.576 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:16:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:16:54.577 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:16:56 compute-0 podman[245200]: 2025-11-24 22:16:56.558989594 +0000 UTC m=+0.103713964 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, distribution-scope=public, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, io.buildah.version=1.29.0, name=ubi9, com.redhat.component=ubi9-container, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.openshift.expose-services=)
Nov 24 22:16:56 compute-0 podman[245201]: 2025-11-24 22:16:56.561861954 +0000 UTC m=+0.097853632 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.6, vcs-type=git, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., name=ubi9-minimal, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2025-08-20T13:12:41, release=1755695350, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container)
Nov 24 22:16:56 compute-0 podman[245202]: 2025-11-24 22:16:56.575617332 +0000 UTC m=+0.096796168 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:16:58 compute-0 nova_compute[189608]: 2025-11-24 22:16:58.010 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:16:58 compute-0 nova_compute[189608]: 2025-11-24 22:16:58.497 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:16:59 compute-0 podman[203795]: time="2025-11-24T22:16:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:16:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:16:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 24 22:16:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:16:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4807 "" "Go-http-client/1.1"
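The two GET requests above are served by the podman API service on this host over its local UNIX socket, the same unix:///run/podman/podman.sock that podman_exporter is configured to use. A minimal Python sketch that issues the first of these calls directly against the socket is below; it assumes the caller has permission to open /run/podman/podman.sock (normally root), and the request path is copied verbatim from the access-log line.

    # Sketch only: query the podman libpod API over its UNIX socket.
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that talks to a local UNIX socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request(
        "GET",
        "/v4.9.3/libpod/containers/json"
        "?all=true&external=false&last=0&namespace=false&size=false&sync=false",
    )
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")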
Nov 24 22:17:00 compute-0 sshd-session[245257]: Invalid user solana from 45.148.10.240 port 56436
Nov 24 22:17:00 compute-0 sshd-session[245257]: Connection closed by invalid user solana 45.148.10.240 port 56436 [preauth]
Nov 24 22:17:01 compute-0 openstack_network_exporter[205945]: ERROR   22:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:17:01 compute-0 openstack_network_exporter[205945]: ERROR   22:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:17:01 compute-0 openstack_network_exporter[205945]: ERROR   22:17:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:17:01 compute-0 openstack_network_exporter[205945]: ERROR   22:17:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:17:01 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:17:01 compute-0 openstack_network_exporter[205945]: ERROR   22:17:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:17:01 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:17:01 compute-0 podman[245259]: 2025-11-24 22:17:01.817420558 +0000 UTC m=+0.136316560 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 24 22:17:03 compute-0 nova_compute[189608]: 2025-11-24 22:17:03.016 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:17:03 compute-0 nova_compute[189608]: 2025-11-24 22:17:03.499 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:17:04 compute-0 podman[245281]: 2025-11-24 22:17:04.545831716 +0000 UTC m=+0.099169242 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 24 22:17:04 compute-0 podman[245282]: 2025-11-24 22:17:04.566016845 +0000 UTC m=+0.104031303 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 22:17:08 compute-0 nova_compute[189608]: 2025-11-24 22:17:08.020 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:17:08 compute-0 nova_compute[189608]: 2025-11-24 22:17:08.502 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:17:12 compute-0 podman[245322]: 2025-11-24 22:17:12.590695749 +0000 UTC m=+0.130622484 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 24 22:17:12 compute-0 nova_compute[189608]: 2025-11-24 22:17:12.803 189613 DEBUG oslo_concurrency.lockutils [None req-a46e4e5c-c8c6-450f-a75e-62deccdb53a1 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "828f9a8f-602f-4ad5-a0b0-5a48a328d20e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:17:12 compute-0 nova_compute[189608]: 2025-11-24 22:17:12.804 189613 DEBUG oslo_concurrency.lockutils [None req-a46e4e5c-c8c6-450f-a75e-62deccdb53a1 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "828f9a8f-602f-4ad5-a0b0-5a48a328d20e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:17:12 compute-0 nova_compute[189608]: 2025-11-24 22:17:12.805 189613 DEBUG oslo_concurrency.lockutils [None req-a46e4e5c-c8c6-450f-a75e-62deccdb53a1 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "828f9a8f-602f-4ad5-a0b0-5a48a328d20e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:17:12 compute-0 nova_compute[189608]: 2025-11-24 22:17:12.805 189613 DEBUG oslo_concurrency.lockutils [None req-a46e4e5c-c8c6-450f-a75e-62deccdb53a1 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "828f9a8f-602f-4ad5-a0b0-5a48a328d20e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:17:12 compute-0 nova_compute[189608]: 2025-11-24 22:17:12.806 189613 DEBUG oslo_concurrency.lockutils [None req-a46e4e5c-c8c6-450f-a75e-62deccdb53a1 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "828f9a8f-602f-4ad5-a0b0-5a48a328d20e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:17:12 compute-0 nova_compute[189608]: 2025-11-24 22:17:12.808 189613 INFO nova.compute.manager [None req-a46e4e5c-c8c6-450f-a75e-62deccdb53a1 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Terminating instance
Nov 24 22:17:12 compute-0 nova_compute[189608]: 2025-11-24 22:17:12.810 189613 DEBUG nova.compute.manager [None req-a46e4e5c-c8c6-450f-a75e-62deccdb53a1 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 24 22:17:12 compute-0 kernel: tap3223b8cb-74 (unregistering): left promiscuous mode
Nov 24 22:17:12 compute-0 NetworkManager[56413]: <info>  [1764022632.8750] device (tap3223b8cb-74): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 24 22:17:12 compute-0 ovn_controller[97889]: 2025-11-24T22:17:12Z|00050|binding|INFO|Releasing lport 3223b8cb-74bd-4db9-8dd2-441f7c81c71c from this chassis (sb_readonly=0)
Nov 24 22:17:12 compute-0 nova_compute[189608]: 2025-11-24 22:17:12.883 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:17:12 compute-0 ovn_controller[97889]: 2025-11-24T22:17:12Z|00051|binding|INFO|Setting lport 3223b8cb-74bd-4db9-8dd2-441f7c81c71c down in Southbound
Nov 24 22:17:12 compute-0 ovn_controller[97889]: 2025-11-24T22:17:12Z|00052|binding|INFO|Removing iface tap3223b8cb-74 ovn-installed in OVS
Nov 24 22:17:12 compute-0 nova_compute[189608]: 2025-11-24 22:17:12.886 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:17:12 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:17:12.902 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:40:c3 192.168.0.166'], port_security=['fa:16:3e:3e:40:c3 192.168.0.166'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-mikdi7lqlsa5-5i26uioiiugb-edy6vcflj352-port-dk6alb7qkgmn', 'neutron:cidrs': '192.168.0.166/24', 'neutron:device_id': '828f9a8f-602f-4ad5-a0b0-5a48a328d20e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1d1b3625-954d-4d8b-8b3f-323c25d9b42a', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-mikdi7lqlsa5-5i26uioiiugb-edy6vcflj352-port-dk6alb7qkgmn', 'neutron:project_id': '309342b7e3e849b2a5dd56651d8fa068', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c15160ba-5e91-4c5e-8c90-4266712c07a0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.244', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b82cef4e-b720-4309-9703-51bb202fedba, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], logical_port=3223b8cb-74bd-4db9-8dd2-441f7c81c71c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:17:12 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:17:12.903 106776 INFO neutron.agent.ovn.metadata.agent [-] Port 3223b8cb-74bd-4db9-8dd2-441f7c81c71c in datapath 1d1b3625-954d-4d8b-8b3f-323c25d9b42a unbound from our chassis
Nov 24 22:17:12 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:17:12.906 106776 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1d1b3625-954d-4d8b-8b3f-323c25d9b42a
Nov 24 22:17:12 compute-0 nova_compute[189608]: 2025-11-24 22:17:12.908 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:17:12 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:17:12.921 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[d369eb26-b076-4a63-b984-201dd663245a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:17:12 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Nov 24 22:17:12 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 7min 41.746s CPU time.
Nov 24 22:17:12 compute-0 systemd-machined[155884]: Machine qemu-2-instance-00000002 terminated.
Nov 24 22:17:12 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:17:12.958 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[5a3e37d9-9db2-454a-bb2c-aae5bf402ab1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:17:12 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:17:12.962 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[15cd4d1a-63e6-4cf7-b531-608c6292a78b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:17:12 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:17:12.987 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[49253181-73f3-475c-8988-4dceb877f898]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:17:13 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:17:13.002 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[f75570be-c3bf-44fe-84e5-934333817dc8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1d1b3625-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:e7:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 11, 'rx_bytes': 532, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 11, 'rx_bytes': 532, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 373788, 'reachable_time': 34149, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245358, 'error': None, 'target': 'ovnmeta-1d1b3625-954d-4d8b-8b3f-323c25d9b42a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:17:13 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:17:13.017 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[ae24ed45-58d8-457a-bb00-145e186c1d33]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1d1b3625-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 373805, 'tstamp': 373805}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245359, 'error': None, 'target': 'ovnmeta-1d1b3625-954d-4d8b-8b3f-323c25d9b42a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap1d1b3625-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 373810, 'tstamp': 373810}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245359, 'error': None, 'target': 'ovnmeta-1d1b3625-954d-4d8b-8b3f-323c25d9b42a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:17:13 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:17:13.019 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d1b3625-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:17:13 compute-0 nova_compute[189608]: 2025-11-24 22:17:13.020 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:17:13 compute-0 nova_compute[189608]: 2025-11-24 22:17:13.022 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:17:13 compute-0 nova_compute[189608]: 2025-11-24 22:17:13.025 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:17:13 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:17:13.026 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1d1b3625-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:17:13 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:17:13.027 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 22:17:13 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:17:13.027 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1d1b3625-90, col_values=(('external_ids', {'iface-id': '13073b7d-8165-42cd-87f4-fb1eb15a5b94'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:17:13 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:17:13.027 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 22:17:13 compute-0 nova_compute[189608]: 2025-11-24 22:17:13.036 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:17:13 compute-0 nova_compute[189608]: 2025-11-24 22:17:13.043 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:17:13 compute-0 nova_compute[189608]: 2025-11-24 22:17:13.100 189613 INFO nova.virt.libvirt.driver [-] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Instance destroyed successfully.
Nov 24 22:17:13 compute-0 nova_compute[189608]: 2025-11-24 22:17:13.101 189613 DEBUG nova.objects.instance [None req-a46e4e5c-c8c6-450f-a75e-62deccdb53a1 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lazy-loading 'resources' on Instance uuid 828f9a8f-602f-4ad5-a0b0-5a48a328d20e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:17:13 compute-0 nova_compute[189608]: 2025-11-24 22:17:13.245 189613 DEBUG nova.virt.libvirt.vif [None req-a46e4e5c-c8c6-450f-a75e-62deccdb53a1 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T22:05:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-7lqlsa5-5i26uioiiugb-edy6vcflj352-vnf-mrrqmlwdy5wk',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-7lqlsa5-5i26uioiiugb-edy6vcflj352-vnf-mrrqmlwdy5wk',id=2,image_ref='a63b9561-12dc-4c11-858f-aa6fafbed036',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-24T22:06:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='b438824c-ce52-4539-9db6-355e0ca018db'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='309342b7e3e849b2a5dd56651d8fa068',ramdisk_id='',reservation_id='r-51rhhz0y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='a63b9561-12dc-4c11-858f-aa6fafbed036',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T22:06:05Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04MTg4NjY0NzkwNjE2OTk0NzgzPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTgxODg2NjQ3OTA2MTY5OTQ3ODM9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODE4ODY2NDc5MDYxNjk5NDc4Mz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91
dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTgxODg2NjQ3OTA2MTY5OTQ3ODM9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04MTg4NjY0NzkwNjE2OTk0NzgzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04MTg4NjY0NzkwNjE2OTk0NzgzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0U
tMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Nov 24 22:17:13 compute-0 nova_compute[189608]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODE4ODY2NDc5MDYxNjk5NDc4Mz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTgxODg2NjQ3OTA2MTY5OTQ3ODM9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04MTg4NjY0NzkwNjE2OTk0NzgzPT0tLQo=',user_id='572aaac113f54af8a894707849aed6bf',uuid=828f9a8f-602f-4ad5-a0b0-5a48a328d20e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3223b8cb-74bd-4db9-8dd2-441f7c81c71c", "address": "fa:16:3e:3e:40:c3", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.166", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3223b8cb-74", "ovs_interfaceid": "3223b8cb-74bd-4db9-8dd2-441f7c81c71c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 24 22:17:13 compute-0 nova_compute[189608]: 2025-11-24 22:17:13.246 189613 DEBUG nova.network.os_vif_util [None req-a46e4e5c-c8c6-450f-a75e-62deccdb53a1 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Converting VIF {"id": "3223b8cb-74bd-4db9-8dd2-441f7c81c71c", "address": "fa:16:3e:3e:40:c3", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.166", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3223b8cb-74", "ovs_interfaceid": "3223b8cb-74bd-4db9-8dd2-441f7c81c71c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 22:17:13 compute-0 nova_compute[189608]: 2025-11-24 22:17:13.246 189613 DEBUG nova.network.os_vif_util [None req-a46e4e5c-c8c6-450f-a75e-62deccdb53a1 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3e:40:c3,bridge_name='br-int',has_traffic_filtering=True,id=3223b8cb-74bd-4db9-8dd2-441f7c81c71c,network=Network(1d1b3625-954d-4d8b-8b3f-323c25d9b42a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3223b8cb-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 22:17:13 compute-0 nova_compute[189608]: 2025-11-24 22:17:13.247 189613 DEBUG os_vif [None req-a46e4e5c-c8c6-450f-a75e-62deccdb53a1 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3e:40:c3,bridge_name='br-int',has_traffic_filtering=True,id=3223b8cb-74bd-4db9-8dd2-441f7c81c71c,network=Network(1d1b3625-954d-4d8b-8b3f-323c25d9b42a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3223b8cb-74') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 24 22:17:13 compute-0 nova_compute[189608]: 2025-11-24 22:17:13.249 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:17:13 compute-0 nova_compute[189608]: 2025-11-24 22:17:13.249 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3223b8cb-74, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:17:13 compute-0 nova_compute[189608]: 2025-11-24 22:17:13.251 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:17:13 compute-0 nova_compute[189608]: 2025-11-24 22:17:13.253 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 22:17:13 compute-0 nova_compute[189608]: 2025-11-24 22:17:13.257 189613 INFO os_vif [None req-a46e4e5c-c8c6-450f-a75e-62deccdb53a1 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3e:40:c3,bridge_name='br-int',has_traffic_filtering=True,id=3223b8cb-74bd-4db9-8dd2-441f7c81c71c,network=Network(1d1b3625-954d-4d8b-8b3f-323c25d9b42a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3223b8cb-74')
Nov 24 22:17:13 compute-0 nova_compute[189608]: 2025-11-24 22:17:13.258 189613 INFO nova.virt.libvirt.driver [None req-a46e4e5c-c8c6-450f-a75e-62deccdb53a1 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Deleting instance files /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e_del
Nov 24 22:17:13 compute-0 nova_compute[189608]: 2025-11-24 22:17:13.259 189613 INFO nova.virt.libvirt.driver [None req-a46e4e5c-c8c6-450f-a75e-62deccdb53a1 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Deletion of /var/lib/nova/instances/828f9a8f-602f-4ad5-a0b0-5a48a328d20e_del complete
Nov 24 22:17:13 compute-0 nova_compute[189608]: 2025-11-24 22:17:13.341 189613 DEBUG nova.virt.libvirt.host [None req-a46e4e5c-c8c6-450f-a75e-62deccdb53a1 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Nov 24 22:17:13 compute-0 nova_compute[189608]: 2025-11-24 22:17:13.342 189613 INFO nova.virt.libvirt.host [None req-a46e4e5c-c8c6-450f-a75e-62deccdb53a1 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] UEFI support detected
Nov 24 22:17:13 compute-0 nova_compute[189608]: 2025-11-24 22:17:13.344 189613 INFO nova.compute.manager [None req-a46e4e5c-c8c6-450f-a75e-62deccdb53a1 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Took 0.53 seconds to destroy the instance on the hypervisor.
Nov 24 22:17:13 compute-0 nova_compute[189608]: 2025-11-24 22:17:13.345 189613 DEBUG oslo.service.loopingcall [None req-a46e4e5c-c8c6-450f-a75e-62deccdb53a1 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 24 22:17:13 compute-0 nova_compute[189608]: 2025-11-24 22:17:13.345 189613 DEBUG nova.compute.manager [-] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 24 22:17:13 compute-0 nova_compute[189608]: 2025-11-24 22:17:13.345 189613 DEBUG nova.network.neutron [-] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 24 22:17:13 compute-0 nova_compute[189608]: 2025-11-24 22:17:13.447 189613 DEBUG nova.compute.manager [req-f97e9b7c-c450-4305-b08b-84ef46b04f19 req-121391a8-bdb1-4a44-aa0c-a70992c3a4a9 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Received event network-vif-unplugged-3223b8cb-74bd-4db9-8dd2-441f7c81c71c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:17:13 compute-0 nova_compute[189608]: 2025-11-24 22:17:13.447 189613 DEBUG oslo_concurrency.lockutils [req-f97e9b7c-c450-4305-b08b-84ef46b04f19 req-121391a8-bdb1-4a44-aa0c-a70992c3a4a9 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "828f9a8f-602f-4ad5-a0b0-5a48a328d20e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:17:13 compute-0 nova_compute[189608]: 2025-11-24 22:17:13.447 189613 DEBUG oslo_concurrency.lockutils [req-f97e9b7c-c450-4305-b08b-84ef46b04f19 req-121391a8-bdb1-4a44-aa0c-a70992c3a4a9 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "828f9a8f-602f-4ad5-a0b0-5a48a328d20e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:17:13 compute-0 nova_compute[189608]: 2025-11-24 22:17:13.448 189613 DEBUG oslo_concurrency.lockutils [req-f97e9b7c-c450-4305-b08b-84ef46b04f19 req-121391a8-bdb1-4a44-aa0c-a70992c3a4a9 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "828f9a8f-602f-4ad5-a0b0-5a48a328d20e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:17:13 compute-0 nova_compute[189608]: 2025-11-24 22:17:13.448 189613 DEBUG nova.compute.manager [req-f97e9b7c-c450-4305-b08b-84ef46b04f19 req-121391a8-bdb1-4a44-aa0c-a70992c3a4a9 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] No waiting events found dispatching network-vif-unplugged-3223b8cb-74bd-4db9-8dd2-441f7c81c71c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 22:17:13 compute-0 nova_compute[189608]: 2025-11-24 22:17:13.448 189613 DEBUG nova.compute.manager [req-f97e9b7c-c450-4305-b08b-84ef46b04f19 req-121391a8-bdb1-4a44-aa0c-a70992c3a4a9 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Received event network-vif-unplugged-3223b8cb-74bd-4db9-8dd2-441f7c81c71c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 24 22:17:13 compute-0 nova_compute[189608]: 2025-11-24 22:17:13.504 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:17:13 compute-0 rsyslogd[237036]: message too long (8192) with configured size 8096, begin of message is: 2025-11-24 22:17:13.245 189613 DEBUG nova.virt.libvirt.vif [None req-a46e4e5c-c8 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 24 22:17:13 compute-0 nova_compute[189608]: 2025-11-24 22:17:13.790 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:17:14 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:17:14.023 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7e:33:1a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '7e:8a:65:da:aa:a2'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:17:14 compute-0 nova_compute[189608]: 2025-11-24 22:17:14.023 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:17:14 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:17:14.024 106776 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 22:17:15 compute-0 nova_compute[189608]: 2025-11-24 22:17:15.548 189613 DEBUG nova.compute.manager [req-93431bd8-8f0e-4fb6-b074-d0360d009ab2 req-99c4ae88-1620-4333-9596-4374b577995d c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Received event network-vif-plugged-3223b8cb-74bd-4db9-8dd2-441f7c81c71c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:17:15 compute-0 nova_compute[189608]: 2025-11-24 22:17:15.548 189613 DEBUG oslo_concurrency.lockutils [req-93431bd8-8f0e-4fb6-b074-d0360d009ab2 req-99c4ae88-1620-4333-9596-4374b577995d c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "828f9a8f-602f-4ad5-a0b0-5a48a328d20e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:17:15 compute-0 nova_compute[189608]: 2025-11-24 22:17:15.548 189613 DEBUG oslo_concurrency.lockutils [req-93431bd8-8f0e-4fb6-b074-d0360d009ab2 req-99c4ae88-1620-4333-9596-4374b577995d c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "828f9a8f-602f-4ad5-a0b0-5a48a328d20e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:17:15 compute-0 nova_compute[189608]: 2025-11-24 22:17:15.548 189613 DEBUG oslo_concurrency.lockutils [req-93431bd8-8f0e-4fb6-b074-d0360d009ab2 req-99c4ae88-1620-4333-9596-4374b577995d c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "828f9a8f-602f-4ad5-a0b0-5a48a328d20e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:17:15 compute-0 nova_compute[189608]: 2025-11-24 22:17:15.549 189613 DEBUG nova.compute.manager [req-93431bd8-8f0e-4fb6-b074-d0360d009ab2 req-99c4ae88-1620-4333-9596-4374b577995d c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] No waiting events found dispatching network-vif-plugged-3223b8cb-74bd-4db9-8dd2-441f7c81c71c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 22:17:15 compute-0 nova_compute[189608]: 2025-11-24 22:17:15.549 189613 WARNING nova.compute.manager [req-93431bd8-8f0e-4fb6-b074-d0360d009ab2 req-99c4ae88-1620-4333-9596-4374b577995d c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Received unexpected event network-vif-plugged-3223b8cb-74bd-4db9-8dd2-441f7c81c71c for instance with vm_state active and task_state deleting.
Nov 24 22:17:15 compute-0 nova_compute[189608]: 2025-11-24 22:17:15.549 189613 DEBUG nova.compute.manager [req-93431bd8-8f0e-4fb6-b074-d0360d009ab2 req-99c4ae88-1620-4333-9596-4374b577995d c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Received event network-changed-3223b8cb-74bd-4db9-8dd2-441f7c81c71c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:17:15 compute-0 nova_compute[189608]: 2025-11-24 22:17:15.549 189613 DEBUG nova.compute.manager [req-93431bd8-8f0e-4fb6-b074-d0360d009ab2 req-99c4ae88-1620-4333-9596-4374b577995d c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Refreshing instance network info cache due to event network-changed-3223b8cb-74bd-4db9-8dd2-441f7c81c71c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 22:17:15 compute-0 nova_compute[189608]: 2025-11-24 22:17:15.550 189613 DEBUG oslo_concurrency.lockutils [req-93431bd8-8f0e-4fb6-b074-d0360d009ab2 req-99c4ae88-1620-4333-9596-4374b577995d c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "refresh_cache-828f9a8f-602f-4ad5-a0b0-5a48a328d20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:17:15 compute-0 nova_compute[189608]: 2025-11-24 22:17:15.550 189613 DEBUG oslo_concurrency.lockutils [req-93431bd8-8f0e-4fb6-b074-d0360d009ab2 req-99c4ae88-1620-4333-9596-4374b577995d c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquired lock "refresh_cache-828f9a8f-602f-4ad5-a0b0-5a48a328d20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:17:15 compute-0 nova_compute[189608]: 2025-11-24 22:17:15.550 189613 DEBUG nova.network.neutron [req-93431bd8-8f0e-4fb6-b074-d0360d009ab2 req-99c4ae88-1620-4333-9596-4374b577995d c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Refreshing network info cache for port 3223b8cb-74bd-4db9-8dd2-441f7c81c71c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 22:17:15 compute-0 nova_compute[189608]: 2025-11-24 22:17:15.737 189613 DEBUG nova.network.neutron [-] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:17:15 compute-0 nova_compute[189608]: 2025-11-24 22:17:15.764 189613 INFO nova.compute.manager [-] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Took 2.42 seconds to deallocate network for instance.
Nov 24 22:17:15 compute-0 nova_compute[189608]: 2025-11-24 22:17:15.787 189613 INFO nova.network.neutron [req-93431bd8-8f0e-4fb6-b074-d0360d009ab2 req-99c4ae88-1620-4333-9596-4374b577995d c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Port 3223b8cb-74bd-4db9-8dd2-441f7c81c71c from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Nov 24 22:17:15 compute-0 nova_compute[189608]: 2025-11-24 22:17:15.788 189613 DEBUG nova.network.neutron [req-93431bd8-8f0e-4fb6-b074-d0360d009ab2 req-99c4ae88-1620-4333-9596-4374b577995d c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:17:15 compute-0 nova_compute[189608]: 2025-11-24 22:17:15.807 189613 DEBUG oslo_concurrency.lockutils [None req-a46e4e5c-c8c6-450f-a75e-62deccdb53a1 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:17:15 compute-0 nova_compute[189608]: 2025-11-24 22:17:15.808 189613 DEBUG oslo_concurrency.lockutils [None req-a46e4e5c-c8c6-450f-a75e-62deccdb53a1 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:17:15 compute-0 nova_compute[189608]: 2025-11-24 22:17:15.810 189613 DEBUG oslo_concurrency.lockutils [req-93431bd8-8f0e-4fb6-b074-d0360d009ab2 req-99c4ae88-1620-4333-9596-4374b577995d c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Releasing lock "refresh_cache-828f9a8f-602f-4ad5-a0b0-5a48a328d20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:17:15 compute-0 nova_compute[189608]: 2025-11-24 22:17:15.969 189613 DEBUG nova.compute.provider_tree [None req-a46e4e5c-c8c6-450f-a75e-62deccdb53a1 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:17:15 compute-0 nova_compute[189608]: 2025-11-24 22:17:15.982 189613 DEBUG nova.scheduler.client.report [None req-a46e4e5c-c8c6-450f-a75e-62deccdb53a1 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:17:15 compute-0 nova_compute[189608]: 2025-11-24 22:17:15.997 189613 DEBUG oslo_concurrency.lockutils [None req-a46e4e5c-c8c6-450f-a75e-62deccdb53a1 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.189s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:17:16 compute-0 nova_compute[189608]: 2025-11-24 22:17:16.022 189613 INFO nova.scheduler.client.report [None req-a46e4e5c-c8c6-450f-a75e-62deccdb53a1 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Deleted allocations for instance 828f9a8f-602f-4ad5-a0b0-5a48a328d20e
Nov 24 22:17:16 compute-0 nova_compute[189608]: 2025-11-24 22:17:16.095 189613 DEBUG oslo_concurrency.lockutils [None req-a46e4e5c-c8c6-450f-a75e-62deccdb53a1 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "828f9a8f-602f-4ad5-a0b0-5a48a328d20e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.291s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:17:16 compute-0 nova_compute[189608]: 2025-11-24 22:17:16.792 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:17:16 compute-0 nova_compute[189608]: 2025-11-24 22:17:16.793 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 22:17:17 compute-0 nova_compute[189608]: 2025-11-24 22:17:17.234 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "refresh_cache-672e3ced-b18a-4ce7-aace-eb5c076ddb88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:17:17 compute-0 nova_compute[189608]: 2025-11-24 22:17:17.234 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquired lock "refresh_cache-672e3ced-b18a-4ce7-aace-eb5c076ddb88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:17:17 compute-0 nova_compute[189608]: 2025-11-24 22:17:17.234 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 24 22:17:18 compute-0 nova_compute[189608]: 2025-11-24 22:17:18.252 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:17:18 compute-0 nova_compute[189608]: 2025-11-24 22:17:18.508 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:17:18 compute-0 podman[245381]: 2025-11-24 22:17:18.577799995 +0000 UTC m=+0.123906974 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 22:17:19 compute-0 nova_compute[189608]: 2025-11-24 22:17:19.585 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Updating instance_info_cache with network_info: [{"id": "2301b73c-6b2a-4a4b-afa2-7d6aa710652b", "address": "fa:16:3e:04:76:e6", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2301b73c-6b", "ovs_interfaceid": "2301b73c-6b2a-4a4b-afa2-7d6aa710652b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:17:19 compute-0 nova_compute[189608]: 2025-11-24 22:17:19.599 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Releasing lock "refresh_cache-672e3ced-b18a-4ce7-aace-eb5c076ddb88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:17:19 compute-0 nova_compute[189608]: 2025-11-24 22:17:19.600 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 24 22:17:22 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:17:22.028 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d2f80616-70e9-484c-836d-1edab81fe5d9, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:17:22 compute-0 podman[245403]: 2025-11-24 22:17:22.580084224 +0000 UTC m=+0.131176701 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 22:17:23 compute-0 nova_compute[189608]: 2025-11-24 22:17:23.257 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:17:23 compute-0 nova_compute[189608]: 2025-11-24 22:17:23.511 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:17:23 compute-0 nova_compute[189608]: 2025-11-24 22:17:23.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:17:23 compute-0 nova_compute[189608]: 2025-11-24 22:17:23.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:17:24 compute-0 nova_compute[189608]: 2025-11-24 22:17:24.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:17:25 compute-0 nova_compute[189608]: 2025-11-24 22:17:25.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:17:25 compute-0 nova_compute[189608]: 2025-11-24 22:17:25.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:17:25 compute-0 nova_compute[189608]: 2025-11-24 22:17:25.827 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:17:25 compute-0 nova_compute[189608]: 2025-11-24 22:17:25.827 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:17:25 compute-0 nova_compute[189608]: 2025-11-24 22:17:25.828 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:17:25 compute-0 nova_compute[189608]: 2025-11-24 22:17:25.828 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 22:17:25 compute-0 nova_compute[189608]: 2025-11-24 22:17:25.927 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:17:25 compute-0 nova_compute[189608]: 2025-11-24 22:17:25.990 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:17:25 compute-0 nova_compute[189608]: 2025-11-24 22:17:25.992 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:17:26 compute-0 nova_compute[189608]: 2025-11-24 22:17:26.074 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:17:26 compute-0 nova_compute[189608]: 2025-11-24 22:17:26.076 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:17:26 compute-0 nova_compute[189608]: 2025-11-24 22:17:26.137 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:17:26 compute-0 nova_compute[189608]: 2025-11-24 22:17:26.139 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:17:26 compute-0 nova_compute[189608]: 2025-11-24 22:17:26.239 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.eph0 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:17:26 compute-0 nova_compute[189608]: 2025-11-24 22:17:26.246 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:17:26 compute-0 nova_compute[189608]: 2025-11-24 22:17:26.327 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:17:26 compute-0 nova_compute[189608]: 2025-11-24 22:17:26.329 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:17:26 compute-0 nova_compute[189608]: 2025-11-24 22:17:26.431 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:17:26 compute-0 nova_compute[189608]: 2025-11-24 22:17:26.432 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:17:26 compute-0 nova_compute[189608]: 2025-11-24 22:17:26.488 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.eph0 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:17:26 compute-0 nova_compute[189608]: 2025-11-24 22:17:26.490 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:17:26 compute-0 nova_compute[189608]: 2025-11-24 22:17:26.561 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.eph0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:17:26 compute-0 nova_compute[189608]: 2025-11-24 22:17:26.568 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:17:26 compute-0 nova_compute[189608]: 2025-11-24 22:17:26.654 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:17:26 compute-0 nova_compute[189608]: 2025-11-24 22:17:26.655 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:17:26 compute-0 nova_compute[189608]: 2025-11-24 22:17:26.711 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:17:26 compute-0 nova_compute[189608]: 2025-11-24 22:17:26.712 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:17:26 compute-0 nova_compute[189608]: 2025-11-24 22:17:26.775 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:17:26 compute-0 nova_compute[189608]: 2025-11-24 22:17:26.776 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:17:26 compute-0 nova_compute[189608]: 2025-11-24 22:17:26.852 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:17:27 compute-0 nova_compute[189608]: 2025-11-24 22:17:27.211 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:17:27 compute-0 nova_compute[189608]: 2025-11-24 22:17:27.214 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4763MB free_disk=72.16130447387695GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 22:17:27 compute-0 nova_compute[189608]: 2025-11-24 22:17:27.215 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:17:27 compute-0 nova_compute[189608]: 2025-11-24 22:17:27.216 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:17:27 compute-0 nova_compute[189608]: 2025-11-24 22:17:27.318 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance ea741b45-c6b4-41c0-a70f-c752b616faa2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:17:27 compute-0 nova_compute[189608]: 2025-11-24 22:17:27.318 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance 672e3ced-b18a-4ce7-aace-eb5c076ddb88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:17:27 compute-0 nova_compute[189608]: 2025-11-24 22:17:27.319 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance 7e7d375c-a42c-41c5-934f-c46941a40067 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:17:27 compute-0 nova_compute[189608]: 2025-11-24 22:17:27.319 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 22:17:27 compute-0 nova_compute[189608]: 2025-11-24 22:17:27.319 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 22:17:27 compute-0 nova_compute[189608]: 2025-11-24 22:17:27.434 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:17:27 compute-0 nova_compute[189608]: 2025-11-24 22:17:27.458 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:17:27 compute-0 nova_compute[189608]: 2025-11-24 22:17:27.498 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 22:17:27 compute-0 nova_compute[189608]: 2025-11-24 22:17:27.499 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.283s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:17:27 compute-0 podman[245462]: 2025-11-24 22:17:27.568436491 +0000 UTC m=+0.099606057 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 24 22:17:27 compute-0 podman[245461]: 2025-11-24 22:17:27.572816407 +0000 UTC m=+0.108585816 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, vendor=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible)
Nov 24 22:17:27 compute-0 podman[245460]: 2025-11-24 22:17:27.592545012 +0000 UTC m=+0.119586209 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, container_name=kepler, io.buildah.version=1.29.0, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release=1214.1726694543, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=9.4, config_id=edpm)
Nov 24 22:17:28 compute-0 nova_compute[189608]: 2025-11-24 22:17:28.096 189613 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764022633.0943282, 828f9a8f-602f-4ad5-a0b0-5a48a328d20e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:17:28 compute-0 nova_compute[189608]: 2025-11-24 22:17:28.097 189613 INFO nova.compute.manager [-] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] VM Stopped (Lifecycle Event)
Nov 24 22:17:28 compute-0 nova_compute[189608]: 2025-11-24 22:17:28.118 189613 DEBUG nova.compute.manager [None req-85c9334e-e545-4cb1-b852-b421f876fbce - - - - - -] [instance: 828f9a8f-602f-4ad5-a0b0-5a48a328d20e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:17:28 compute-0 nova_compute[189608]: 2025-11-24 22:17:28.260 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:17:28 compute-0 nova_compute[189608]: 2025-11-24 22:17:28.514 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:17:29 compute-0 nova_compute[189608]: 2025-11-24 22:17:29.497 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:17:29 compute-0 podman[203795]: time="2025-11-24T22:17:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:17:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:17:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 24 22:17:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:17:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4806 "" "Go-http-client/1.1"
Nov 24 22:17:29 compute-0 nova_compute[189608]: 2025-11-24 22:17:29.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:17:29 compute-0 nova_compute[189608]: 2025-11-24 22:17:29.795 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:17:29 compute-0 nova_compute[189608]: 2025-11-24 22:17:29.795 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 22:17:31 compute-0 openstack_network_exporter[205945]: ERROR   22:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:17:31 compute-0 openstack_network_exporter[205945]: ERROR   22:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:17:31 compute-0 openstack_network_exporter[205945]: ERROR   22:17:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:17:31 compute-0 openstack_network_exporter[205945]: ERROR   22:17:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:17:31 compute-0 openstack_network_exporter[205945]: ERROR   22:17:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:17:32 compute-0 podman[245516]: 2025-11-24 22:17:32.580952059 +0000 UTC m=+0.125110351 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 22:17:33 compute-0 nova_compute[189608]: 2025-11-24 22:17:33.263 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:17:33 compute-0 nova_compute[189608]: 2025-11-24 22:17:33.517 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:17:35 compute-0 podman[245540]: 2025-11-24 22:17:35.575930487 +0000 UTC m=+0.113626863 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent)
Nov 24 22:17:35 compute-0 podman[245539]: 2025-11-24 22:17:35.625602126 +0000 UTC m=+0.170542697 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, container_name=ovn_controller)
Nov 24 22:17:38 compute-0 nova_compute[189608]: 2025-11-24 22:17:38.267 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:17:38 compute-0 nova_compute[189608]: 2025-11-24 22:17:38.521 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:17:43 compute-0 nova_compute[189608]: 2025-11-24 22:17:43.271 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:17:43 compute-0 nova_compute[189608]: 2025-11-24 22:17:43.525 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:17:43 compute-0 podman[245583]: 2025-11-24 22:17:43.574446933 +0000 UTC m=+0.126390130 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 24 22:17:48 compute-0 nova_compute[189608]: 2025-11-24 22:17:48.273 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:17:48 compute-0 nova_compute[189608]: 2025-11-24 22:17:48.528 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:17:48 compute-0 ovn_controller[97889]: 2025-11-24T22:17:48Z|00053|memory_trim|INFO|Detected inactivity (last active 30014 ms ago): trimming memory
Nov 24 22:17:49 compute-0 podman[245607]: 2025-11-24 22:17:49.577664754 +0000 UTC m=+0.122336374 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 24 22:17:53 compute-0 nova_compute[189608]: 2025-11-24 22:17:53.277 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:17:53 compute-0 nova_compute[189608]: 2025-11-24 22:17:53.529 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:17:53 compute-0 podman[245626]: 2025-11-24 22:17:53.580930723 +0000 UTC m=+0.124373257 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844)
Nov 24 22:17:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:17:54.576 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:17:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:17:54.577 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:17:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:17:54.577 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:17:58 compute-0 nova_compute[189608]: 2025-11-24 22:17:58.280 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:17:58 compute-0 nova_compute[189608]: 2025-11-24 22:17:58.530 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:17:58 compute-0 podman[245647]: 2025-11-24 22:17:58.544630641 +0000 UTC m=+0.087634673 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 22:17:58 compute-0 podman[245645]: 2025-11-24 22:17:58.54554571 +0000 UTC m=+0.100430922 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, container_name=kepler, release=1214.1726694543, vcs-type=git, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.openshift.tags=base rhel9, name=ubi9, release-0.7.12=, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Nov 24 22:17:58 compute-0 podman[245646]: 2025-11-24 22:17:58.58213733 +0000 UTC m=+0.126863535 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, config_id=edpm, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Nov 24 22:17:59 compute-0 podman[203795]: time="2025-11-24T22:17:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:17:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:17:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 24 22:17:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:17:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4805 "" "Go-http-client/1.1"
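The two GET requests above are the libpod REST API being queried over the podman socket (the podman_exporter container later in this log sets CONTAINER_HOST=unix:///run/podman/podman.sock). A rough way to reproduce the containers/json call by hand, assuming curl and that socket path are available on the host:

    # Hypothetical sketch: list containers through the libpod API over the
    # unix socket, mirroring GET /v4.9.3/libpod/containers/json?all=true above.
    import json
    import subprocess

    out = subprocess.run(
        ["curl", "-s", "--unix-socket", "/run/podman/podman.sock",
         "http://d/v4.9.3/libpod/containers/json?all=true"],
        capture_output=True, text=True, check=True)
    for c in json.loads(out.stdout):
        print(c["Names"], c["State"])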
Nov 24 22:18:01 compute-0 openstack_network_exporter[205945]: ERROR   22:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:18:01 compute-0 openstack_network_exporter[205945]: ERROR   22:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:18:01 compute-0 openstack_network_exporter[205945]: ERROR   22:18:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:18:01 compute-0 openstack_network_exporter[205945]: ERROR   22:18:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:18:01 compute-0 openstack_network_exporter[205945]: ERROR   22:18:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
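The appctl errors above are expected on a compute node: openstack_network_exporter probes for ovn-northd and ovsdb-server control sockets, but those daemons normally run on the controllers, so no *.ctl files exist under the /run/ovn and /run/openvswitch paths mounted into the exporter. A quick check of the same condition (glob patterns assumed from the usual control-socket naming, not taken from the exporter's code):

    # Hypothetical sketch: look for the OVS/OVN control sockets the exporter probes.
    import glob

    for pattern in ("/run/ovn/ovn-northd.*.ctl",
                    "/run/openvswitch/ovsdb-server.*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", hits or "no control socket files found")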
Nov 24 22:18:03 compute-0 nova_compute[189608]: 2025-11-24 22:18:03.283 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:18:03 compute-0 nova_compute[189608]: 2025-11-24 22:18:03.534 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:18:03 compute-0 podman[245700]: 2025-11-24 22:18:03.573011763 +0000 UTC m=+0.111713853 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
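The node_exporter config above limits the systemd collector with --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service. A small check of which unit names that pattern matches; node_exporter anchors the regex, so fullmatch approximates its behaviour, and the unit names below are examples rather than units read from this host:

    # Check the unit-include regex used by node_exporter's systemd collector.
    import re

    unit_include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")
    for unit in ["edpm_nova_compute.service", "ovs-vswitchd.service",
                 "virtqemud.service", "rsyslog.service", "sshd.service"]:
        print(unit, bool(unit_include.fullmatch(unit)))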
Nov 24 22:18:06 compute-0 podman[245724]: 2025-11-24 22:18:06.625083602 +0000 UTC m=+0.163477457 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent)
Nov 24 22:18:06 compute-0 podman[245723]: 2025-11-24 22:18:06.637129067 +0000 UTC m=+0.179826846 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 22:18:08 compute-0 nova_compute[189608]: 2025-11-24 22:18:08.286 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:18:08 compute-0 nova_compute[189608]: 2025-11-24 22:18:08.535 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:18:13 compute-0 nova_compute[189608]: 2025-11-24 22:18:13.292 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:18:13 compute-0 nova_compute[189608]: 2025-11-24 22:18:13.547 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:18:14 compute-0 podman[245764]: 2025-11-24 22:18:14.560650155 +0000 UTC m=+0.111254508 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.624 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them. Therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.625 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
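The two manager messages above say the [pollsters] source has more pollsters than worker threads and will process them with a single thread, so the pollsters run one after another. The effect is easy to reproduce with the same executor type the agent logs; the pollster names and the sleep below are illustrative only:

    # Illustrative only: more tasks than workers means they run sequentially.
    import time
    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        time.sleep(0.1)          # stand-in for one pollster's work
        return name

    pollsters = ["cpu", "disk.device.read.bytes", "network.outgoing.packets.drop"]
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=1) as pool:   # one worker, as logged
        list(pool.map(poll, pollsters))
    print(f"elapsed with 1 worker: {time.monotonic() - start:.2f}s")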
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.625 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.626 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f4c57fbbfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.627 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.627 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.627 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.628 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.628 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.628 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.629 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.629 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.631 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.631 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.632 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.632 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.633 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.634 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.634 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '672e3ced-b18a-4ce7-aace-eb5c076ddb88', 'name': 'vn-7lqlsa5-oenbgqpbbpdb-jsa5qovsyfxv-vnf-d3kwbkhn7jvw', 'flavor': {'id': 'cb3b4b8d-66d1-4524-ab48-d562ca8e5b4b', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a63b9561-12dc-4c11-858f-aa6fafbed036'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '309342b7e3e849b2a5dd56651d8fa068', 'user_id': '572aaac113f54af8a894707849aed6bf', 'hostId': '138597a5fd3bc726da772e57d320c412a04081cc4fa53fa9b77f5b6a', 'status': 'active', 'metadata': {'metering.server_group': 'b438824c-ce52-4539-9db6-355e0ca018db'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.634 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.635 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.636 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.636 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.637 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.637 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.637 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.638 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.638 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.639 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.639 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.641 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7e7d375c-a42c-41c5-934f-c46941a40067', 'name': 'vn-7lqlsa5-qwni3m7kkkje-hcw6uchrxoeb-vnf-7cvls2zpo5gs', 'flavor': {'id': 'cb3b4b8d-66d1-4524-ab48-d562ca8e5b4b', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a63b9561-12dc-4c11-858f-aa6fafbed036'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '309342b7e3e849b2a5dd56651d8fa068', 'user_id': '572aaac113f54af8a894707849aed6bf', 'hostId': '138597a5fd3bc726da772e57d320c412a04081cc4fa53fa9b77f5b6a', 'status': 'active', 'metadata': {'metering.server_group': 'b438824c-ce52-4539-9db6-355e0ca018db'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.644 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'ea741b45-c6b4-41c0-a70f-c752b616faa2', 'name': 'test_0', 'flavor': {'id': 'cb3b4b8d-66d1-4524-ab48-d562ca8e5b4b', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a63b9561-12dc-4c11-858f-aa6fafbed036'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '309342b7e3e849b2a5dd56651d8fa068', 'user_id': '572aaac113f54af8a894707849aed6bf', 'hostId': '138597a5fd3bc726da772e57d320c412a04081cc4fa53fa9b77f5b6a', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
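The "instance data" dictionaries above come from ceilometer's libvirt discovery: it lists the local domains and reads the Nova metadata element (the same namespace seen as LIBVIRT_METADATA_URI in the kepler config earlier, http://openstack.org/xmlns/libvirt/nova/1.1). A hedged sketch of that lookup using the libvirt Python bindings; this shows where the data originates, not ceilometer's exact parsing:

    # Hypothetical sketch: read the Nova metadata that discovery is built from.
    import libvirt  # requires libvirt-python and access to the local hypervisor

    NOVA_NS = "http://openstack.org/xmlns/libvirt/nova/1.1"

    conn = libvirt.openReadOnly("qemu:///system")
    for dom in conn.listAllDomains():
        try:
            xml = dom.metadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, NOVA_NS, 0)
        except libvirt.libvirtError:
            continue  # not a Nova-managed guest
        print(dom.name(), dom.UUIDString())
        print(xml)  # contains the nova:name, flavor, owner, etc.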
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.645 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.645 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.645 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.645 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.648 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-24T22:18:17.645790) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.652 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.660 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.668 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.669 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
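Every pollster in this log goes through the same sequence: discovery of local instances, a coordination check (skipped here since the hashrings are None), a heartbeat update recorded by a sibling process, one sample per resource, then "Finished polling". A compressed sketch of that loop shape, assuming hypothetical callables and not ceilometer's actual code:

    # Schematic of the per-pollster cycle visible in the log (not ceilometer code).
    def run_pollster(name, discover, get_samples, heartbeat):
        resources = discover()                # "Executing discovery process ..."
        heartbeat(name)                       # "Pollster heartbeat update: <name>"
        samples = []
        for res in resources:
            samples.extend(get_samples(res))  # "<uuid>/<name> volume: <n>"
        return samples                        # "Finished polling pollster <name>"

    demo = run_pollster(
        "network.outgoing.packets.drop",
        discover=lambda: ["672e3ced-b18a-4ce7-aace-eb5c076ddb88"],
        get_samples=lambda res: [(res, 0)],
        heartbeat=lambda n: None)
    print(demo)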
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.669 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f4c580340b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.670 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.670 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.670 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.670 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.670 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.671 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.672 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.671 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-24T22:18:17.670508) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.673 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.673 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f4c57fba4e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.673 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.674 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.674 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.674 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.675 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-24T22:18:17.674490) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.723 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.724 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.724 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.764 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.765 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.765 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:17 compute-0 nova_compute[189608]: 2025-11-24 22:18:17.796 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:18:17 compute-0 nova_compute[189608]: 2025-11-24 22:18:17.797 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.801 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.802 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.802 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.802 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.803 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f4c57fbb080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.803 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.803 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.803 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.803 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.804 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-24T22:18:17.803547) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.903 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.904 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.904 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.998 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.999 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:17.999 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.081 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.082 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.082 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.083 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.083 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f4c57fbb1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.083 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.083 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.083 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.083 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.083 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.read.latency volume: 822487867 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.084 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.read.latency volume: 92574229 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.084 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.read.latency volume: 84915884 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.085 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.read.latency volume: 742675991 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.084 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-24T22:18:18.083801) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.085 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.read.latency volume: 148600369 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.085 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.read.latency volume: 107984847 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.085 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.latency volume: 667223647 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.086 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.latency volume: 118019506 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.086 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.latency volume: 84934831 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.086 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.087 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f4c57fb9730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.087 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.087 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.087 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.087 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.087 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-24T22:18:18.087449) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.113 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/cpu volume: 39230000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.142 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/cpu volume: 38070000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.171 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/cpu volume: 44120000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.171 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
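The cpu samples above are cumulative guest CPU time in nanoseconds (39230000000 ns is about 39.2 s for instance 672e3ced-b18a-4ce7-aace-eb5c076ddb88). A utilisation percentage is normally derived from the delta between two successive polls; a hedged arithmetic sketch with an invented second reading and polling interval:

    # Illustrative only: derive a CPU utilisation % from two cumulative cpu samples.
    def cpu_util(prev_ns, curr_ns, interval_s, vcpus):
        return (curr_ns - prev_ns) / (interval_s * vcpus * 1e9) * 100.0

    # first reading from the log; second reading and interval invented for the example
    print(cpu_util(39_230_000_000, 39_530_000_000, interval_s=30, vcpus=1))  # -> 1.0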
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.172 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f4c57fbb200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.172 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.172 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.172 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.172 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.172 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.172 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.173 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.173 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.173 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.173 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.174 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.174 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.174 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.175 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.175 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f4c57fbb260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.175 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.175 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.175 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.175 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.175 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.175 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.176 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.176 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.176 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.176 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.177 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.177 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.177 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.178 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.178 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f4c57fbb2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.178 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-24T22:18:18.172503) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.178 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.178 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.178 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-24T22:18:18.175664) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.178 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.180 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.180 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.write.bytes volume: 41844736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.180 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.180 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.181 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.181 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.181 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.181 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.182 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-24T22:18:18.180021) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.182 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.182 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.183 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.183 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f4c57fbb320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.183 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.184 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.184 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.184 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.184 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.write.latency volume: 2777922560 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.184 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.write.latency volume: 9784234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.184 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.185 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.write.latency volume: 2394094265 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.185 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.write.latency volume: 18006599 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.185 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.185 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.latency volume: 2026985911 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.186 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.latency volume: 25037348 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.186 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.186 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-24T22:18:18.184110) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.186 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.187 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f4c57fbb380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.187 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.187 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.187 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.187 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.187 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.write.requests volume: 236 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.187 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.188 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.188 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.188 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.189 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.189 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.189 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.190 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.190 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.190 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f4c58034380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.191 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.191 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.191 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.191 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.191 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.191 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-24T22:18:18.187478) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.191 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.192 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-24T22:18:18.191307) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.192 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.192 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.192 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f4c57fbb740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.192 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.193 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.193 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.193 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.193 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.193 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-24T22:18:18.193443) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.193 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.194 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.194 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.194 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f4c57fbb3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.194 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.194 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.194 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.195 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.195 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.195 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f4c57fbb9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.196 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.196 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f4c57fbb440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.196 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.196 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-24T22:18:18.195032) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.196 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.196 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.196 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.197 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.197 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f4c57fbbc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.197 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.197 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.197 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.197 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.197 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.198 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.198 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-24T22:18:18.196658) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.198 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.incoming.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.199 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.199 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f4c57fbbcb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.199 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.199 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.199 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-24T22:18:18.197731) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.199 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.199 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.199 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.200 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.200 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.200 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.200 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f4c57fbbd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.200 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.201 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.201 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.201 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.201 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-24T22:18:18.199797) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.201 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.201 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.201 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.202 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.202 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f4c57fbbda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.202 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.202 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.202 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.202 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.203 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.203 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.outgoing.bytes volume: 2398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.203 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-24T22:18:18.201263) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.203 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.203 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.204 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f4c57fbbe30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.204 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.204 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.204 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-24T22:18:18.202776) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.204 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.204 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.204 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.204 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.204 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-24T22:18:18.204328) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.205 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.205 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.205 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f4c57fbb680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.205 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.205 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.205 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.205 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.205 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/memory.usage volume: 48.890625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.206 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/memory.usage volume: 49.04296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.206 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-24T22:18:18.205892) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.206 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/memory.usage volume: 48.9140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.207 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.207 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f4c57fbbec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.207 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.207 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f4c57fbb6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.207 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.207 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.207 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.207 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.207 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.208 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.incoming.bytes volume: 1612 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.208 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.incoming.bytes volume: 2220 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.208 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.208 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f4c59b99130>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.208 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.209 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.209 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-24T22:18:18.207751) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.209 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.209 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.209 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.209 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.210 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.210 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.allocation volume: 22290432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.210 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.211 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.211 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.211 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.211 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.212 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.212 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f4c57fbbf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.212 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.212 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.212 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.212 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.213 14 DEBUG ceilometer.compute.pollsters [-] 672e3ced-b18a-4ce7-aace-eb5c076ddb88/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.213 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.213 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.213 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.214 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-24T22:18:18.209546) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.214 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-24T22:18:18.212942) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.214 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.214 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.214 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.214 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.215 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.215 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.215 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.215 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.215 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.215 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.215 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.215 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.215 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.215 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.216 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.216 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.216 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.216 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.216 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.216 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.216 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.216 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.216 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.217 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.217 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:18:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:18:18.217 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:18:18 compute-0 nova_compute[189608]: 2025-11-24 22:18:18.236 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "refresh_cache-7e7d375c-a42c-41c5-934f-c46941a40067" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:18:18 compute-0 nova_compute[189608]: 2025-11-24 22:18:18.237 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquired lock "refresh_cache-7e7d375c-a42c-41c5-934f-c46941a40067" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:18:18 compute-0 nova_compute[189608]: 2025-11-24 22:18:18.238 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 24 22:18:18 compute-0 nova_compute[189608]: 2025-11-24 22:18:18.300 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:18:18 compute-0 nova_compute[189608]: 2025-11-24 22:18:18.542 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:18:19 compute-0 nova_compute[189608]: 2025-11-24 22:18:19.388 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Updating instance_info_cache with network_info: [{"id": "482c3cfb-c114-4d01-aa49-09b8d4fdaaa5", "address": "fa:16:3e:b3:8e:1d", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.49", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap482c3cfb-c1", "ovs_interfaceid": "482c3cfb-c114-4d01-aa49-09b8d4fdaaa5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:18:19 compute-0 nova_compute[189608]: 2025-11-24 22:18:19.434 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Releasing lock "refresh_cache-7e7d375c-a42c-41c5-934f-c46941a40067" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:18:19 compute-0 nova_compute[189608]: 2025-11-24 22:18:19.434 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 24 22:18:20 compute-0 podman[245792]: 2025-11-24 22:18:20.617208001 +0000 UTC m=+0.167795611 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:18:20 compute-0 sshd-session[245790]: Invalid user solv from 193.32.162.145 port 58472
Nov 24 22:18:20 compute-0 sshd-session[245790]: Connection closed by invalid user solv 193.32.162.145 port 58472 [preauth]
Nov 24 22:18:23 compute-0 nova_compute[189608]: 2025-11-24 22:18:23.302 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:18:23 compute-0 nova_compute[189608]: 2025-11-24 22:18:23.547 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:18:24 compute-0 podman[245812]: 2025-11-24 22:18:24.624723311 +0000 UTC m=+0.166787669 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_id=edpm, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 24 22:18:24 compute-0 nova_compute[189608]: 2025-11-24 22:18:24.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:18:25 compute-0 nova_compute[189608]: 2025-11-24 22:18:25.791 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:18:25 compute-0 nova_compute[189608]: 2025-11-24 22:18:25.792 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:18:25 compute-0 nova_compute[189608]: 2025-11-24 22:18:25.792 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:18:25 compute-0 nova_compute[189608]: 2025-11-24 22:18:25.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:18:25 compute-0 nova_compute[189608]: 2025-11-24 22:18:25.830 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:18:25 compute-0 nova_compute[189608]: 2025-11-24 22:18:25.830 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:18:25 compute-0 nova_compute[189608]: 2025-11-24 22:18:25.831 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:18:25 compute-0 nova_compute[189608]: 2025-11-24 22:18:25.832 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 22:18:25 compute-0 nova_compute[189608]: 2025-11-24 22:18:25.967 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:18:26 compute-0 nova_compute[189608]: 2025-11-24 22:18:26.053 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:18:26 compute-0 nova_compute[189608]: 2025-11-24 22:18:26.055 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:18:26 compute-0 nova_compute[189608]: 2025-11-24 22:18:26.123 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:18:26 compute-0 nova_compute[189608]: 2025-11-24 22:18:26.125 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:18:26 compute-0 nova_compute[189608]: 2025-11-24 22:18:26.207 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.eph0 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:18:26 compute-0 nova_compute[189608]: 2025-11-24 22:18:26.209 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:18:26 compute-0 nova_compute[189608]: 2025-11-24 22:18:26.304 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88/disk.eph0 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:18:26 compute-0 nova_compute[189608]: 2025-11-24 22:18:26.315 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:18:26 compute-0 nova_compute[189608]: 2025-11-24 22:18:26.415 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:18:26 compute-0 nova_compute[189608]: 2025-11-24 22:18:26.418 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:18:26 compute-0 nova_compute[189608]: 2025-11-24 22:18:26.516 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:18:26 compute-0 nova_compute[189608]: 2025-11-24 22:18:26.518 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:18:26 compute-0 nova_compute[189608]: 2025-11-24 22:18:26.589 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.eph0 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:18:26 compute-0 nova_compute[189608]: 2025-11-24 22:18:26.592 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:18:26 compute-0 nova_compute[189608]: 2025-11-24 22:18:26.656 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:18:26 compute-0 nova_compute[189608]: 2025-11-24 22:18:26.666 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:18:26 compute-0 nova_compute[189608]: 2025-11-24 22:18:26.763 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:18:26 compute-0 nova_compute[189608]: 2025-11-24 22:18:26.765 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:18:26 compute-0 nova_compute[189608]: 2025-11-24 22:18:26.861 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:18:26 compute-0 nova_compute[189608]: 2025-11-24 22:18:26.864 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:18:26 compute-0 nova_compute[189608]: 2025-11-24 22:18:26.933 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:18:26 compute-0 nova_compute[189608]: 2025-11-24 22:18:26.935 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:18:27 compute-0 nova_compute[189608]: 2025-11-24 22:18:27.022 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:18:27 compute-0 nova_compute[189608]: 2025-11-24 22:18:27.488 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:18:27 compute-0 nova_compute[189608]: 2025-11-24 22:18:27.490 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4757MB free_disk=72.16130447387695GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 22:18:27 compute-0 nova_compute[189608]: 2025-11-24 22:18:27.491 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:18:27 compute-0 nova_compute[189608]: 2025-11-24 22:18:27.492 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:18:27 compute-0 nova_compute[189608]: 2025-11-24 22:18:27.591 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance ea741b45-c6b4-41c0-a70f-c752b616faa2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:18:27 compute-0 nova_compute[189608]: 2025-11-24 22:18:27.592 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance 672e3ced-b18a-4ce7-aace-eb5c076ddb88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:18:27 compute-0 nova_compute[189608]: 2025-11-24 22:18:27.592 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance 7e7d375c-a42c-41c5-934f-c46941a40067 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:18:27 compute-0 nova_compute[189608]: 2025-11-24 22:18:27.593 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 22:18:27 compute-0 nova_compute[189608]: 2025-11-24 22:18:27.593 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 22:18:27 compute-0 nova_compute[189608]: 2025-11-24 22:18:27.684 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:18:27 compute-0 nova_compute[189608]: 2025-11-24 22:18:27.705 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:18:27 compute-0 nova_compute[189608]: 2025-11-24 22:18:27.707 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 22:18:27 compute-0 nova_compute[189608]: 2025-11-24 22:18:27.708 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.216s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:18:28 compute-0 nova_compute[189608]: 2025-11-24 22:18:28.308 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:18:28 compute-0 nova_compute[189608]: 2025-11-24 22:18:28.551 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:18:29 compute-0 podman[245868]: 2025-11-24 22:18:29.575406793 +0000 UTC m=+0.113332864 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, vendor=Red Hat, Inc., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, architecture=x86_64, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., version=9.6, config_id=edpm, vcs-type=git)
Nov 24 22:18:29 compute-0 podman[245869]: 2025-11-24 22:18:29.597912255 +0000 UTC m=+0.122825940 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible)
Nov 24 22:18:29 compute-0 podman[245867]: 2025-11-24 22:18:29.60002918 +0000 UTC m=+0.141545573 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, name=ubi9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, vcs-type=git, version=9.4, io.buildah.version=1.29.0, io.openshift.expose-services=, architecture=x86_64, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, managed_by=edpm_ansible, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30)
Nov 24 22:18:29 compute-0 podman[203795]: time="2025-11-24T22:18:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:18:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:18:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 24 22:18:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:18:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4809 "" "Go-http-client/1.1"
Nov 24 22:18:30 compute-0 nova_compute[189608]: 2025-11-24 22:18:30.710 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:18:30 compute-0 nova_compute[189608]: 2025-11-24 22:18:30.712 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:18:30 compute-0 nova_compute[189608]: 2025-11-24 22:18:30.713 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 22:18:30 compute-0 nova_compute[189608]: 2025-11-24 22:18:30.796 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:18:31 compute-0 openstack_network_exporter[205945]: ERROR   22:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:18:31 compute-0 openstack_network_exporter[205945]: ERROR   22:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:18:31 compute-0 openstack_network_exporter[205945]: ERROR   22:18:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:18:31 compute-0 openstack_network_exporter[205945]: ERROR   22:18:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:18:31 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:18:31 compute-0 openstack_network_exporter[205945]: ERROR   22:18:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:18:31 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:18:33 compute-0 nova_compute[189608]: 2025-11-24 22:18:33.312 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:18:33 compute-0 nova_compute[189608]: 2025-11-24 22:18:33.554 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:18:34 compute-0 podman[245921]: 2025-11-24 22:18:34.566804204 +0000 UTC m=+0.115524932 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 22:18:37 compute-0 podman[245945]: 2025-11-24 22:18:37.561531664 +0000 UTC m=+0.108554455 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 24 22:18:37 compute-0 podman[245944]: 2025-11-24 22:18:37.648627939 +0000 UTC m=+0.202144133 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 22:18:38 compute-0 nova_compute[189608]: 2025-11-24 22:18:38.316 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:18:38 compute-0 nova_compute[189608]: 2025-11-24 22:18:38.555 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:18:43 compute-0 nova_compute[189608]: 2025-11-24 22:18:43.320 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:18:43 compute-0 nova_compute[189608]: 2025-11-24 22:18:43.557 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:18:44 compute-0 podman[245986]: 2025-11-24 22:18:44.801212264 +0000 UTC m=+0.093145794 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 24 22:18:48 compute-0 nova_compute[189608]: 2025-11-24 22:18:48.324 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:18:48 compute-0 nova_compute[189608]: 2025-11-24 22:18:48.561 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:18:51 compute-0 podman[246011]: 2025-11-24 22:18:51.57998076 +0000 UTC m=+0.124917514 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:18:53 compute-0 nova_compute[189608]: 2025-11-24 22:18:53.327 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:18:53 compute-0 nova_compute[189608]: 2025-11-24 22:18:53.564 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:18:53 compute-0 sshd-session[246031]: Invalid user solana from 45.148.10.240 port 51884
Nov 24 22:18:53 compute-0 sshd-session[246031]: Connection closed by invalid user solana 45.148.10.240 port 51884 [preauth]
Nov 24 22:18:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:18:54.577 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:18:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:18:54.578 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:18:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:18:54.579 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:18:55 compute-0 podman[246033]: 2025-11-24 22:18:55.562484932 +0000 UTC m=+0.102179896 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 24 22:18:58 compute-0 nova_compute[189608]: 2025-11-24 22:18:58.330 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:18:58 compute-0 nova_compute[189608]: 2025-11-24 22:18:58.568 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:18:59 compute-0 podman[203795]: time="2025-11-24T22:18:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:18:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:18:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 24 22:18:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:18:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4808 "" "Go-http-client/1.1"
Nov 24 22:19:00 compute-0 podman[246054]: 2025-11-24 22:19:00.58270013 +0000 UTC m=+0.113404845 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, release=1755695350, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Nov 24 22:19:00 compute-0 podman[246053]: 2025-11-24 22:19:00.586862851 +0000 UTC m=+0.129258061 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, container_name=kepler, io.openshift.expose-services=, release-0.7.12=, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., release=1214.1726694543, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Nov 24 22:19:00 compute-0 podman[246055]: 2025-11-24 22:19:00.612615734 +0000 UTC m=+0.140644396 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm)
Nov 24 22:19:01 compute-0 openstack_network_exporter[205945]: ERROR   22:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:19:01 compute-0 openstack_network_exporter[205945]: ERROR   22:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:19:01 compute-0 openstack_network_exporter[205945]: ERROR   22:19:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:19:01 compute-0 openstack_network_exporter[205945]: ERROR   22:19:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:19:01 compute-0 openstack_network_exporter[205945]: ERROR   22:19:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:19:03 compute-0 nova_compute[189608]: 2025-11-24 22:19:03.335 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:19:03 compute-0 nova_compute[189608]: 2025-11-24 22:19:03.571 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:19:05 compute-0 podman[246111]: 2025-11-24 22:19:05.572230092 +0000 UTC m=+0.110326270 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 24 22:19:08 compute-0 nova_compute[189608]: 2025-11-24 22:19:08.341 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:19:08 compute-0 podman[246135]: 2025-11-24 22:19:08.519191744 +0000 UTC m=+0.127614448 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent)
Nov 24 22:19:08 compute-0 nova_compute[189608]: 2025-11-24 22:19:08.573 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:19:08 compute-0 podman[246134]: 2025-11-24 22:19:08.599671223 +0000 UTC m=+0.214033673 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Nov 24 22:19:13 compute-0 nova_compute[189608]: 2025-11-24 22:19:13.346 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:19:13 compute-0 nova_compute[189608]: 2025-11-24 22:19:13.576 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:19:15 compute-0 podman[246174]: 2025-11-24 22:19:15.552442922 +0000 UTC m=+0.100055350 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 24 22:19:15 compute-0 nova_compute[189608]: 2025-11-24 22:19:15.790 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:19:16 compute-0 nova_compute[189608]: 2025-11-24 22:19:16.991 189613 DEBUG nova.compute.manager [req-03fb25e1-d300-4146-9514-a7cea8f8261e req-f7992b20-628f-4522-a006-3cc143fa13ce c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Received event network-changed-2301b73c-6b2a-4a4b-afa2-7d6aa710652b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:19:16 compute-0 nova_compute[189608]: 2025-11-24 22:19:16.992 189613 DEBUG nova.compute.manager [req-03fb25e1-d300-4146-9514-a7cea8f8261e req-f7992b20-628f-4522-a006-3cc143fa13ce c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Refreshing instance network info cache due to event network-changed-2301b73c-6b2a-4a4b-afa2-7d6aa710652b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 22:19:16 compute-0 nova_compute[189608]: 2025-11-24 22:19:16.993 189613 DEBUG oslo_concurrency.lockutils [req-03fb25e1-d300-4146-9514-a7cea8f8261e req-f7992b20-628f-4522-a006-3cc143fa13ce c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "refresh_cache-672e3ced-b18a-4ce7-aace-eb5c076ddb88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:19:16 compute-0 nova_compute[189608]: 2025-11-24 22:19:16.993 189613 DEBUG oslo_concurrency.lockutils [req-03fb25e1-d300-4146-9514-a7cea8f8261e req-f7992b20-628f-4522-a006-3cc143fa13ce c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquired lock "refresh_cache-672e3ced-b18a-4ce7-aace-eb5c076ddb88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:19:16 compute-0 nova_compute[189608]: 2025-11-24 22:19:16.994 189613 DEBUG nova.network.neutron [req-03fb25e1-d300-4146-9514-a7cea8f8261e req-f7992b20-628f-4522-a006-3cc143fa13ce c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Refreshing network info cache for port 2301b73c-6b2a-4a4b-afa2-7d6aa710652b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 22:19:17 compute-0 nova_compute[189608]: 2025-11-24 22:19:17.301 189613 DEBUG oslo_concurrency.lockutils [None req-410eb1b4-be4b-44da-8c43-b200f3703e75 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "672e3ced-b18a-4ce7-aace-eb5c076ddb88" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:19:17 compute-0 nova_compute[189608]: 2025-11-24 22:19:17.301 189613 DEBUG oslo_concurrency.lockutils [None req-410eb1b4-be4b-44da-8c43-b200f3703e75 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "672e3ced-b18a-4ce7-aace-eb5c076ddb88" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:19:17 compute-0 nova_compute[189608]: 2025-11-24 22:19:17.302 189613 DEBUG oslo_concurrency.lockutils [None req-410eb1b4-be4b-44da-8c43-b200f3703e75 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "672e3ced-b18a-4ce7-aace-eb5c076ddb88-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:19:17 compute-0 nova_compute[189608]: 2025-11-24 22:19:17.302 189613 DEBUG oslo_concurrency.lockutils [None req-410eb1b4-be4b-44da-8c43-b200f3703e75 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "672e3ced-b18a-4ce7-aace-eb5c076ddb88-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:19:17 compute-0 nova_compute[189608]: 2025-11-24 22:19:17.303 189613 DEBUG oslo_concurrency.lockutils [None req-410eb1b4-be4b-44da-8c43-b200f3703e75 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "672e3ced-b18a-4ce7-aace-eb5c076ddb88-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:19:17 compute-0 nova_compute[189608]: 2025-11-24 22:19:17.304 189613 INFO nova.compute.manager [None req-410eb1b4-be4b-44da-8c43-b200f3703e75 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Terminating instance
Nov 24 22:19:17 compute-0 nova_compute[189608]: 2025-11-24 22:19:17.306 189613 DEBUG nova.compute.manager [None req-410eb1b4-be4b-44da-8c43-b200f3703e75 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 24 22:19:17 compute-0 kernel: tap2301b73c-6b (unregistering): left promiscuous mode
Nov 24 22:19:17 compute-0 NetworkManager[56413]: <info>  [1764022757.3557] device (tap2301b73c-6b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 24 22:19:17 compute-0 ovn_controller[97889]: 2025-11-24T22:19:17Z|00054|binding|INFO|Releasing lport 2301b73c-6b2a-4a4b-afa2-7d6aa710652b from this chassis (sb_readonly=0)
Nov 24 22:19:17 compute-0 ovn_controller[97889]: 2025-11-24T22:19:17Z|00055|binding|INFO|Setting lport 2301b73c-6b2a-4a4b-afa2-7d6aa710652b down in Southbound
Nov 24 22:19:17 compute-0 nova_compute[189608]: 2025-11-24 22:19:17.370 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:19:17 compute-0 ovn_controller[97889]: 2025-11-24T22:19:17Z|00056|binding|INFO|Removing iface tap2301b73c-6b ovn-installed in OVS
Nov 24 22:19:17 compute-0 nova_compute[189608]: 2025-11-24 22:19:17.374 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:19:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:19:17.380 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:04:76:e6 192.168.0.182'], port_security=['fa:16:3e:04:76:e6 192.168.0.182'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-mikdi7lqlsa5-oenbgqpbbpdb-jsa5qovsyfxv-port-wwdx6emlbsja', 'neutron:cidrs': '192.168.0.182/24', 'neutron:device_id': '672e3ced-b18a-4ce7-aace-eb5c076ddb88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1d1b3625-954d-4d8b-8b3f-323c25d9b42a', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-mikdi7lqlsa5-oenbgqpbbpdb-jsa5qovsyfxv-port-wwdx6emlbsja', 'neutron:project_id': '309342b7e3e849b2a5dd56651d8fa068', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c15160ba-5e91-4c5e-8c90-4266712c07a0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b82cef4e-b720-4309-9703-51bb202fedba, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], logical_port=2301b73c-6b2a-4a4b-afa2-7d6aa710652b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:19:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:19:17.382 106776 INFO neutron.agent.ovn.metadata.agent [-] Port 2301b73c-6b2a-4a4b-afa2-7d6aa710652b in datapath 1d1b3625-954d-4d8b-8b3f-323c25d9b42a unbound from our chassis
Nov 24 22:19:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:19:17.383 106776 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1d1b3625-954d-4d8b-8b3f-323c25d9b42a
Nov 24 22:19:17 compute-0 nova_compute[189608]: 2025-11-24 22:19:17.402 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:19:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:19:17.419 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[e67faab8-26a4-4eb2-a44c-938307c6ec5d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:19:17 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Nov 24 22:19:17 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 1min 41.240s CPU time.
Nov 24 22:19:17 compute-0 systemd-machined[155884]: Machine qemu-3-instance-00000003 terminated.
Nov 24 22:19:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:19:17.466 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[4a891554-ca90-49fd-88e6-ddf73ecb6da4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:19:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:19:17.471 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[a04d3688-0691-4318-ba16-9dbe10693c18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:19:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:19:17.504 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[1ef22156-10ec-4ae2-a0a8-6c0c2f43d07a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:19:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:19:17.534 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[ac120d4b-b7d6-42a7-b708-455ed0a664b2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1d1b3625-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:e7:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 13, 'rx_bytes': 532, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 13, 'rx_bytes': 532, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 373788, 'reachable_time': 18621, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 246208, 'error': None, 'target': 'ovnmeta-1d1b3625-954d-4d8b-8b3f-323c25d9b42a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:19:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:19:17.558 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[fce7a519-2527-4c30-a55d-8033d365e026]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1d1b3625-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 373805, 'tstamp': 373805}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 246215, 'error': None, 'target': 'ovnmeta-1d1b3625-954d-4d8b-8b3f-323c25d9b42a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap1d1b3625-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 373810, 'tstamp': 373810}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 246215, 'error': None, 'target': 'ovnmeta-1d1b3625-954d-4d8b-8b3f-323c25d9b42a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:19:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:19:17.560 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d1b3625-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:19:17 compute-0 nova_compute[189608]: 2025-11-24 22:19:17.563 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:19:17 compute-0 nova_compute[189608]: 2025-11-24 22:19:17.569 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:19:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:19:17.570 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1d1b3625-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:19:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:19:17.571 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 22:19:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:19:17.571 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1d1b3625-90, col_values=(('external_ids', {'iface-id': '13073b7d-8165-42cd-87f4-fb1eb15a5b94'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:19:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:19:17.571 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 22:19:17 compute-0 nova_compute[189608]: 2025-11-24 22:19:17.608 189613 INFO nova.virt.libvirt.driver [-] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Instance destroyed successfully.
Nov 24 22:19:17 compute-0 nova_compute[189608]: 2025-11-24 22:19:17.609 189613 DEBUG nova.objects.instance [None req-410eb1b4-be4b-44da-8c43-b200f3703e75 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lazy-loading 'resources' on Instance uuid 672e3ced-b18a-4ce7-aace-eb5c076ddb88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:19:17 compute-0 nova_compute[189608]: 2025-11-24 22:19:17.622 189613 DEBUG nova.virt.libvirt.vif [None req-410eb1b4-be4b-44da-8c43-b200f3703e75 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T22:11:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-7lqlsa5-oenbgqpbbpdb-jsa5qovsyfxv-vnf-d3kwbkhn7jvw',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-7lqlsa5-oenbgqpbbpdb-jsa5qovsyfxv-vnf-d3kwbkhn7jvw',id=3,image_ref='a63b9561-12dc-4c11-858f-aa6fafbed036',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-24T22:11:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='b438824c-ce52-4539-9db6-355e0ca018db'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='309342b7e3e849b2a5dd56651d8fa068',ramdisk_id='',reservation_id='r-t3n2k8ox',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='a63b9561-12dc-4c11-858f-aa6fafbed036',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T22:11:24Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04Mjk3Mzg2MTEwNzA2NDc0NzA1PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTgyOTczODYxMTA3MDY0NzQ3MDU9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODI5NzM4NjExMDcwNjQ3NDcwNT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91
dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTgyOTczODYxMTA3MDY0NzQ3MDU9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04Mjk3Mzg2MTEwNzA2NDc0NzA1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04Mjk3Mzg2MTEwNzA2NDc0NzA1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0U
tMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Nov 24 22:19:17 compute-0 nova_compute[189608]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODI5NzM4NjExMDcwNjQ3NDcwNT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTgyOTczODYxMTA3MDY0NzQ3MDU9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04Mjk3Mzg2MTEwNzA2NDc0NzA1PT0tLQo=',user_id='572aaac113f54af8a894707849aed6bf',uuid=672e3ced-b18a-4ce7-aace-eb5c076ddb88,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2301b73c-6b2a-4a4b-afa2-7d6aa710652b", "address": "fa:16:3e:04:76:e6", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2301b73c-6b", "ovs_interfaceid": "2301b73c-6b2a-4a4b-afa2-7d6aa710652b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 24 22:19:17 compute-0 nova_compute[189608]: 2025-11-24 22:19:17.623 189613 DEBUG nova.network.os_vif_util [None req-410eb1b4-be4b-44da-8c43-b200f3703e75 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Converting VIF {"id": "2301b73c-6b2a-4a4b-afa2-7d6aa710652b", "address": "fa:16:3e:04:76:e6", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2301b73c-6b", "ovs_interfaceid": "2301b73c-6b2a-4a4b-afa2-7d6aa710652b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 22:19:17 compute-0 nova_compute[189608]: 2025-11-24 22:19:17.624 189613 DEBUG nova.network.os_vif_util [None req-410eb1b4-be4b-44da-8c43-b200f3703e75 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:04:76:e6,bridge_name='br-int',has_traffic_filtering=True,id=2301b73c-6b2a-4a4b-afa2-7d6aa710652b,network=Network(1d1b3625-954d-4d8b-8b3f-323c25d9b42a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2301b73c-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 22:19:17 compute-0 nova_compute[189608]: 2025-11-24 22:19:17.625 189613 DEBUG os_vif [None req-410eb1b4-be4b-44da-8c43-b200f3703e75 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:04:76:e6,bridge_name='br-int',has_traffic_filtering=True,id=2301b73c-6b2a-4a4b-afa2-7d6aa710652b,network=Network(1d1b3625-954d-4d8b-8b3f-323c25d9b42a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2301b73c-6b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 24 22:19:17 compute-0 nova_compute[189608]: 2025-11-24 22:19:17.628 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:19:17 compute-0 nova_compute[189608]: 2025-11-24 22:19:17.628 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2301b73c-6b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:19:17 compute-0 nova_compute[189608]: 2025-11-24 22:19:17.631 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:19:17 compute-0 nova_compute[189608]: 2025-11-24 22:19:17.633 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 22:19:17 compute-0 nova_compute[189608]: 2025-11-24 22:19:17.638 189613 INFO os_vif [None req-410eb1b4-be4b-44da-8c43-b200f3703e75 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:04:76:e6,bridge_name='br-int',has_traffic_filtering=True,id=2301b73c-6b2a-4a4b-afa2-7d6aa710652b,network=Network(1d1b3625-954d-4d8b-8b3f-323c25d9b42a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2301b73c-6b')
Nov 24 22:19:17 compute-0 nova_compute[189608]: 2025-11-24 22:19:17.639 189613 INFO nova.virt.libvirt.driver [None req-410eb1b4-be4b-44da-8c43-b200f3703e75 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Deleting instance files /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88_del
Nov 24 22:19:17 compute-0 nova_compute[189608]: 2025-11-24 22:19:17.640 189613 INFO nova.virt.libvirt.driver [None req-410eb1b4-be4b-44da-8c43-b200f3703e75 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Deletion of /var/lib/nova/instances/672e3ced-b18a-4ce7-aace-eb5c076ddb88_del complete
Nov 24 22:19:17 compute-0 nova_compute[189608]: 2025-11-24 22:19:17.649 189613 DEBUG nova.compute.manager [req-61d9d337-63a7-4345-b516-af7e8fd72bfe req-9295a43c-95bd-47af-97ae-6c374adce687 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Received event network-vif-unplugged-2301b73c-6b2a-4a4b-afa2-7d6aa710652b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:19:17 compute-0 nova_compute[189608]: 2025-11-24 22:19:17.650 189613 DEBUG oslo_concurrency.lockutils [req-61d9d337-63a7-4345-b516-af7e8fd72bfe req-9295a43c-95bd-47af-97ae-6c374adce687 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "672e3ced-b18a-4ce7-aace-eb5c076ddb88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:19:17 compute-0 nova_compute[189608]: 2025-11-24 22:19:17.650 189613 DEBUG oslo_concurrency.lockutils [req-61d9d337-63a7-4345-b516-af7e8fd72bfe req-9295a43c-95bd-47af-97ae-6c374adce687 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "672e3ced-b18a-4ce7-aace-eb5c076ddb88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:19:17 compute-0 nova_compute[189608]: 2025-11-24 22:19:17.651 189613 DEBUG oslo_concurrency.lockutils [req-61d9d337-63a7-4345-b516-af7e8fd72bfe req-9295a43c-95bd-47af-97ae-6c374adce687 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "672e3ced-b18a-4ce7-aace-eb5c076ddb88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:19:17 compute-0 nova_compute[189608]: 2025-11-24 22:19:17.651 189613 DEBUG nova.compute.manager [req-61d9d337-63a7-4345-b516-af7e8fd72bfe req-9295a43c-95bd-47af-97ae-6c374adce687 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] No waiting events found dispatching network-vif-unplugged-2301b73c-6b2a-4a4b-afa2-7d6aa710652b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 22:19:17 compute-0 nova_compute[189608]: 2025-11-24 22:19:17.652 189613 DEBUG nova.compute.manager [req-61d9d337-63a7-4345-b516-af7e8fd72bfe req-9295a43c-95bd-47af-97ae-6c374adce687 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Received event network-vif-unplugged-2301b73c-6b2a-4a4b-afa2-7d6aa710652b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 24 22:19:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:19:17.664 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7e:33:1a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '7e:8a:65:da:aa:a2'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:19:17 compute-0 nova_compute[189608]: 2025-11-24 22:19:17.664 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:19:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:19:17.666 106776 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 22:19:17 compute-0 nova_compute[189608]: 2025-11-24 22:19:17.703 189613 INFO nova.compute.manager [None req-410eb1b4-be4b-44da-8c43-b200f3703e75 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Took 0.40 seconds to destroy the instance on the hypervisor.
Nov 24 22:19:17 compute-0 nova_compute[189608]: 2025-11-24 22:19:17.704 189613 DEBUG oslo.service.loopingcall [None req-410eb1b4-be4b-44da-8c43-b200f3703e75 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 24 22:19:17 compute-0 nova_compute[189608]: 2025-11-24 22:19:17.705 189613 DEBUG nova.compute.manager [-] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 24 22:19:17 compute-0 nova_compute[189608]: 2025-11-24 22:19:17.705 189613 DEBUG nova.network.neutron [-] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 24 22:19:17 compute-0 rsyslogd[237036]: message too long (8192) with configured size 8096, begin of message is: 2025-11-24 22:19:17.622 189613 DEBUG nova.virt.libvirt.vif [None req-410eb1b4-be [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 24 22:19:18 compute-0 nova_compute[189608]: 2025-11-24 22:19:18.581 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:19:18 compute-0 nova_compute[189608]: 2025-11-24 22:19:18.792 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:19:18 compute-0 nova_compute[189608]: 2025-11-24 22:19:18.793 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 22:19:18 compute-0 nova_compute[189608]: 2025-11-24 22:19:18.793 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 22:19:18 compute-0 nova_compute[189608]: 2025-11-24 22:19:18.816 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Nov 24 22:19:19 compute-0 nova_compute[189608]: 2025-11-24 22:19:19.513 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "refresh_cache-ea741b45-c6b4-41c0-a70f-c752b616faa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:19:19 compute-0 nova_compute[189608]: 2025-11-24 22:19:19.514 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquired lock "refresh_cache-ea741b45-c6b4-41c0-a70f-c752b616faa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:19:19 compute-0 nova_compute[189608]: 2025-11-24 22:19:19.514 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 24 22:19:19 compute-0 nova_compute[189608]: 2025-11-24 22:19:19.515 189613 DEBUG nova.objects.instance [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lazy-loading 'info_cache' on Instance uuid ea741b45-c6b4-41c0-a70f-c752b616faa2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:19:19 compute-0 nova_compute[189608]: 2025-11-24 22:19:19.563 189613 DEBUG nova.network.neutron [req-03fb25e1-d300-4146-9514-a7cea8f8261e req-f7992b20-628f-4522-a006-3cc143fa13ce c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Updated VIF entry in instance network info cache for port 2301b73c-6b2a-4a4b-afa2-7d6aa710652b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 22:19:19 compute-0 nova_compute[189608]: 2025-11-24 22:19:19.564 189613 DEBUG nova.network.neutron [req-03fb25e1-d300-4146-9514-a7cea8f8261e req-f7992b20-628f-4522-a006-3cc143fa13ce c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Updating instance_info_cache with network_info: [{"id": "2301b73c-6b2a-4a4b-afa2-7d6aa710652b", "address": "fa:16:3e:04:76:e6", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2301b73c-6b", "ovs_interfaceid": "2301b73c-6b2a-4a4b-afa2-7d6aa710652b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:19:19 compute-0 nova_compute[189608]: 2025-11-24 22:19:19.581 189613 DEBUG oslo_concurrency.lockutils [req-03fb25e1-d300-4146-9514-a7cea8f8261e req-f7992b20-628f-4522-a006-3cc143fa13ce c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Releasing lock "refresh_cache-672e3ced-b18a-4ce7-aace-eb5c076ddb88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:19:19 compute-0 nova_compute[189608]: 2025-11-24 22:19:19.759 189613 DEBUG nova.compute.manager [req-08f3eafa-dfb0-4a00-aa39-5598484d861a req-6be41fce-6670-4731-8935-99a440921228 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Received event network-vif-plugged-2301b73c-6b2a-4a4b-afa2-7d6aa710652b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:19:19 compute-0 nova_compute[189608]: 2025-11-24 22:19:19.760 189613 DEBUG oslo_concurrency.lockutils [req-08f3eafa-dfb0-4a00-aa39-5598484d861a req-6be41fce-6670-4731-8935-99a440921228 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "672e3ced-b18a-4ce7-aace-eb5c076ddb88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:19:19 compute-0 nova_compute[189608]: 2025-11-24 22:19:19.761 189613 DEBUG oslo_concurrency.lockutils [req-08f3eafa-dfb0-4a00-aa39-5598484d861a req-6be41fce-6670-4731-8935-99a440921228 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "672e3ced-b18a-4ce7-aace-eb5c076ddb88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:19:19 compute-0 nova_compute[189608]: 2025-11-24 22:19:19.762 189613 DEBUG oslo_concurrency.lockutils [req-08f3eafa-dfb0-4a00-aa39-5598484d861a req-6be41fce-6670-4731-8935-99a440921228 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "672e3ced-b18a-4ce7-aace-eb5c076ddb88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:19:19 compute-0 nova_compute[189608]: 2025-11-24 22:19:19.762 189613 DEBUG nova.compute.manager [req-08f3eafa-dfb0-4a00-aa39-5598484d861a req-6be41fce-6670-4731-8935-99a440921228 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] No waiting events found dispatching network-vif-plugged-2301b73c-6b2a-4a4b-afa2-7d6aa710652b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 22:19:19 compute-0 nova_compute[189608]: 2025-11-24 22:19:19.763 189613 WARNING nova.compute.manager [req-08f3eafa-dfb0-4a00-aa39-5598484d861a req-6be41fce-6670-4731-8935-99a440921228 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Received unexpected event network-vif-plugged-2301b73c-6b2a-4a4b-afa2-7d6aa710652b for instance with vm_state active and task_state deleting.
Nov 24 22:19:20 compute-0 nova_compute[189608]: 2025-11-24 22:19:20.200 189613 DEBUG nova.network.neutron [-] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:19:20 compute-0 nova_compute[189608]: 2025-11-24 22:19:20.218 189613 INFO nova.compute.manager [-] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Took 2.51 seconds to deallocate network for instance.
Nov 24 22:19:20 compute-0 nova_compute[189608]: 2025-11-24 22:19:20.266 189613 DEBUG oslo_concurrency.lockutils [None req-410eb1b4-be4b-44da-8c43-b200f3703e75 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:19:20 compute-0 nova_compute[189608]: 2025-11-24 22:19:20.266 189613 DEBUG oslo_concurrency.lockutils [None req-410eb1b4-be4b-44da-8c43-b200f3703e75 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:19:20 compute-0 nova_compute[189608]: 2025-11-24 22:19:20.425 189613 DEBUG nova.compute.provider_tree [None req-410eb1b4-be4b-44da-8c43-b200f3703e75 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:19:20 compute-0 nova_compute[189608]: 2025-11-24 22:19:20.437 189613 DEBUG nova.scheduler.client.report [None req-410eb1b4-be4b-44da-8c43-b200f3703e75 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:19:20 compute-0 nova_compute[189608]: 2025-11-24 22:19:20.451 189613 DEBUG oslo_concurrency.lockutils [None req-410eb1b4-be4b-44da-8c43-b200f3703e75 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.185s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:19:20 compute-0 nova_compute[189608]: 2025-11-24 22:19:20.522 189613 INFO nova.scheduler.client.report [None req-410eb1b4-be4b-44da-8c43-b200f3703e75 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Deleted allocations for instance 672e3ced-b18a-4ce7-aace-eb5c076ddb88
Nov 24 22:19:20 compute-0 nova_compute[189608]: 2025-11-24 22:19:20.597 189613 DEBUG oslo_concurrency.lockutils [None req-410eb1b4-be4b-44da-8c43-b200f3703e75 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "672e3ced-b18a-4ce7-aace-eb5c076ddb88" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.296s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:19:21 compute-0 nova_compute[189608]: 2025-11-24 22:19:21.013 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Updating instance_info_cache with network_info: [{"id": "5430cfcb-550b-4518-9caa-0720f99730b9", "address": "fa:16:3e:85:21:ae", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5430cfcb-55", "ovs_interfaceid": "5430cfcb-550b-4518-9caa-0720f99730b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:19:21 compute-0 nova_compute[189608]: 2025-11-24 22:19:21.042 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Releasing lock "refresh_cache-ea741b45-c6b4-41c0-a70f-c752b616faa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:19:21 compute-0 nova_compute[189608]: 2025-11-24 22:19:21.043 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 24 22:19:22 compute-0 podman[246234]: 2025-11-24 22:19:22.535184258 +0000 UTC m=+0.089334876 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:19:22 compute-0 nova_compute[189608]: 2025-11-24 22:19:22.632 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:19:23 compute-0 nova_compute[189608]: 2025-11-24 22:19:23.584 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:19:24 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:19:24.669 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d2f80616-70e9-484c-836d-1edab81fe5d9, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:19:25 compute-0 nova_compute[189608]: 2025-11-24 22:19:25.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:19:25 compute-0 nova_compute[189608]: 2025-11-24 22:19:25.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:19:25 compute-0 nova_compute[189608]: 2025-11-24 22:19:25.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:19:25 compute-0 nova_compute[189608]: 2025-11-24 22:19:25.821 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:19:25 compute-0 nova_compute[189608]: 2025-11-24 22:19:25.822 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:19:25 compute-0 nova_compute[189608]: 2025-11-24 22:19:25.822 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:19:25 compute-0 nova_compute[189608]: 2025-11-24 22:19:25.823 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 22:19:25 compute-0 nova_compute[189608]: 2025-11-24 22:19:25.960 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:19:26 compute-0 podman[246256]: 2025-11-24 22:19:26.047964869 +0000 UTC m=+0.140097378 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251118, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 24 22:19:26 compute-0 nova_compute[189608]: 2025-11-24 22:19:26.081 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk --force-share --output=json" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:19:26 compute-0 nova_compute[189608]: 2025-11-24 22:19:26.083 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:19:26 compute-0 nova_compute[189608]: 2025-11-24 22:19:26.177 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:19:26 compute-0 nova_compute[189608]: 2025-11-24 22:19:26.179 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:19:26 compute-0 nova_compute[189608]: 2025-11-24 22:19:26.278 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.eph0 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:19:26 compute-0 nova_compute[189608]: 2025-11-24 22:19:26.281 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:19:26 compute-0 nova_compute[189608]: 2025-11-24 22:19:26.360 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.eph0 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:19:26 compute-0 nova_compute[189608]: 2025-11-24 22:19:26.375 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:19:26 compute-0 nova_compute[189608]: 2025-11-24 22:19:26.434 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:19:26 compute-0 nova_compute[189608]: 2025-11-24 22:19:26.436 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:19:26 compute-0 nova_compute[189608]: 2025-11-24 22:19:26.538 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:19:26 compute-0 nova_compute[189608]: 2025-11-24 22:19:26.540 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:19:26 compute-0 nova_compute[189608]: 2025-11-24 22:19:26.619 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:19:26 compute-0 nova_compute[189608]: 2025-11-24 22:19:26.621 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:19:26 compute-0 nova_compute[189608]: 2025-11-24 22:19:26.722 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:19:27 compute-0 nova_compute[189608]: 2025-11-24 22:19:27.211 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:19:27 compute-0 nova_compute[189608]: 2025-11-24 22:19:27.213 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4927MB free_disk=72.18348693847656GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 22:19:27 compute-0 nova_compute[189608]: 2025-11-24 22:19:27.213 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:19:27 compute-0 nova_compute[189608]: 2025-11-24 22:19:27.214 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:19:27 compute-0 nova_compute[189608]: 2025-11-24 22:19:27.312 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance ea741b45-c6b4-41c0-a70f-c752b616faa2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:19:27 compute-0 nova_compute[189608]: 2025-11-24 22:19:27.313 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance 7e7d375c-a42c-41c5-934f-c46941a40067 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:19:27 compute-0 nova_compute[189608]: 2025-11-24 22:19:27.314 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 22:19:27 compute-0 nova_compute[189608]: 2025-11-24 22:19:27.314 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 22:19:27 compute-0 nova_compute[189608]: 2025-11-24 22:19:27.404 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:19:27 compute-0 nova_compute[189608]: 2025-11-24 22:19:27.420 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:19:27 compute-0 nova_compute[189608]: 2025-11-24 22:19:27.444 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 22:19:27 compute-0 nova_compute[189608]: 2025-11-24 22:19:27.445 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.232s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:19:27 compute-0 nova_compute[189608]: 2025-11-24 22:19:27.637 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:19:28 compute-0 nova_compute[189608]: 2025-11-24 22:19:28.446 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:19:28 compute-0 nova_compute[189608]: 2025-11-24 22:19:28.447 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:19:28 compute-0 nova_compute[189608]: 2025-11-24 22:19:28.588 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:19:29 compute-0 podman[203795]: time="2025-11-24T22:19:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:19:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:19:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 24 22:19:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:19:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4813 "" "Go-http-client/1.1"
Nov 24 22:19:30 compute-0 nova_compute[189608]: 2025-11-24 22:19:30.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:19:30 compute-0 nova_compute[189608]: 2025-11-24 22:19:30.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:19:30 compute-0 nova_compute[189608]: 2025-11-24 22:19:30.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:19:30 compute-0 nova_compute[189608]: 2025-11-24 22:19:30.795 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 22:19:31 compute-0 openstack_network_exporter[205945]: ERROR   22:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:19:31 compute-0 openstack_network_exporter[205945]: ERROR   22:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:19:31 compute-0 openstack_network_exporter[205945]: ERROR   22:19:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:19:31 compute-0 openstack_network_exporter[205945]: ERROR   22:19:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:19:31 compute-0 openstack_network_exporter[205945]: ERROR   22:19:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:19:31 compute-0 podman[246301]: 2025-11-24 22:19:31.596546579 +0000 UTC m=+0.129842718 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.openshift.expose-services=, release=1755695350, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc.)
Nov 24 22:19:31 compute-0 podman[246300]: 2025-11-24 22:19:31.60491619 +0000 UTC m=+0.146688183 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, name=ubi9, release-0.7.12=, distribution-scope=public, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_id=edpm, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible)
Nov 24 22:19:31 compute-0 podman[246302]: 2025-11-24 22:19:31.605096425 +0000 UTC m=+0.131864891 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, container_name=ceilometer_agent_ipmi, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 24 22:19:32 compute-0 nova_compute[189608]: 2025-11-24 22:19:32.606 189613 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764022757.6030493, 672e3ced-b18a-4ce7-aace-eb5c076ddb88 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:19:32 compute-0 nova_compute[189608]: 2025-11-24 22:19:32.607 189613 INFO nova.compute.manager [-] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] VM Stopped (Lifecycle Event)
Nov 24 22:19:32 compute-0 nova_compute[189608]: 2025-11-24 22:19:32.640 189613 DEBUG nova.compute.manager [None req-9fb87bc8-3817-4e1e-97c4-98941ae97c6d - - - - - -] [instance: 672e3ced-b18a-4ce7-aace-eb5c076ddb88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:19:32 compute-0 nova_compute[189608]: 2025-11-24 22:19:32.641 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:19:33 compute-0 nova_compute[189608]: 2025-11-24 22:19:33.592 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:19:36 compute-0 podman[246359]: 2025-11-24 22:19:36.552995488 +0000 UTC m=+0.100884455 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 24 22:19:37 compute-0 nova_compute[189608]: 2025-11-24 22:19:37.645 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:19:38 compute-0 nova_compute[189608]: 2025-11-24 22:19:38.595 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:19:39 compute-0 podman[246384]: 2025-11-24 22:19:39.592494625 +0000 UTC m=+0.126260317 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 24 22:19:39 compute-0 podman[246383]: 2025-11-24 22:19:39.630967384 +0000 UTC m=+0.177578156 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller)
Nov 24 22:19:42 compute-0 nova_compute[189608]: 2025-11-24 22:19:42.648 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:19:43 compute-0 nova_compute[189608]: 2025-11-24 22:19:43.597 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:19:46 compute-0 podman[246427]: 2025-11-24 22:19:46.582126831 +0000 UTC m=+0.123446159 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 24 22:19:47 compute-0 nova_compute[189608]: 2025-11-24 22:19:47.652 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:19:48 compute-0 nova_compute[189608]: 2025-11-24 22:19:48.601 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:19:49 compute-0 sshd-session[246453]: Accepted publickey for zuul from 38.102.83.200 port 42786 ssh2: RSA SHA256:s+FA6JzLhmwu0B38HrAw2rBq4K/LlOWNM3GY6zwZaMs
Nov 24 22:19:49 compute-0 systemd-logind[806]: New session 30 of user zuul.
Nov 24 22:19:49 compute-0 systemd[1]: Started Session 30 of User zuul.
Nov 24 22:19:49 compute-0 sshd-session[246453]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 22:19:51 compute-0 sudo[246630]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmlhuqvuzcqatupprkzbdkmrkwowvlfa ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764022790.1560264-59410-57223431830459/AnsiballZ_command.py'
Nov 24 22:19:51 compute-0 sudo[246630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 22:19:51 compute-0 python3[246632]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 22:19:51 compute-0 sudo[246630]: pam_unix(sudo:session): session closed for user root
Nov 24 22:19:52 compute-0 ovn_controller[97889]: 2025-11-24T22:19:52Z|00057|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Nov 24 22:19:52 compute-0 nova_compute[189608]: 2025-11-24 22:19:52.655 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:19:53 compute-0 podman[246672]: 2025-11-24 22:19:53.556142238 +0000 UTC m=+0.101129553 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 24 22:19:53 compute-0 nova_compute[189608]: 2025-11-24 22:19:53.604 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:19:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:19:54.579 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:19:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:19:54.580 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:19:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:19:54.581 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:19:56 compute-0 podman[246693]: 2025-11-24 22:19:56.545820003 +0000 UTC m=+0.093894107 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 24 22:19:57 compute-0 nova_compute[189608]: 2025-11-24 22:19:57.659 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:19:58 compute-0 nova_compute[189608]: 2025-11-24 22:19:58.607 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:19:59 compute-0 podman[203795]: time="2025-11-24T22:19:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:19:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:19:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 24 22:19:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:19:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4806 "" "Go-http-client/1.1"
Nov 24 22:20:01 compute-0 openstack_network_exporter[205945]: ERROR   22:20:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:20:01 compute-0 openstack_network_exporter[205945]: ERROR   22:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:20:01 compute-0 openstack_network_exporter[205945]: ERROR   22:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:20:01 compute-0 openstack_network_exporter[205945]: ERROR   22:20:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:20:01 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:20:01 compute-0 openstack_network_exporter[205945]: ERROR   22:20:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:20:01 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:20:02 compute-0 podman[246712]: 2025-11-24 22:20:02.54413226 +0000 UTC m=+0.095125386 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, architecture=x86_64, container_name=openstack_network_exporter, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vendor=Red Hat, Inc.)
Nov 24 22:20:02 compute-0 podman[246711]: 2025-11-24 22:20:02.56178767 +0000 UTC m=+0.115934794 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, version=9.4, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=)
Nov 24 22:20:02 compute-0 podman[246713]: 2025-11-24 22:20:02.582585069 +0000 UTC m=+0.119314200 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS)
Nov 24 22:20:02 compute-0 nova_compute[189608]: 2025-11-24 22:20:02.662 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:20:03 compute-0 nova_compute[189608]: 2025-11-24 22:20:03.610 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:20:07 compute-0 podman[246768]: 2025-11-24 22:20:07.570005026 +0000 UTC m=+0.116504013 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 24 22:20:07 compute-0 nova_compute[189608]: 2025-11-24 22:20:07.664 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:20:07 compute-0 nova_compute[189608]: 2025-11-24 22:20:07.927 189613 DEBUG oslo_concurrency.lockutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "91f0c72b-d856-4250-89de-f420d598e74a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:20:07 compute-0 nova_compute[189608]: 2025-11-24 22:20:07.928 189613 DEBUG oslo_concurrency.lockutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "91f0c72b-d856-4250-89de-f420d598e74a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:20:07 compute-0 nova_compute[189608]: 2025-11-24 22:20:07.950 189613 DEBUG nova.compute.manager [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 24 22:20:08 compute-0 nova_compute[189608]: 2025-11-24 22:20:08.053 189613 DEBUG oslo_concurrency.lockutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:20:08 compute-0 nova_compute[189608]: 2025-11-24 22:20:08.054 189613 DEBUG oslo_concurrency.lockutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:20:08 compute-0 nova_compute[189608]: 2025-11-24 22:20:08.069 189613 DEBUG nova.virt.hardware [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 24 22:20:08 compute-0 nova_compute[189608]: 2025-11-24 22:20:08.071 189613 INFO nova.compute.claims [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] Claim successful on node compute-0.ctlplane.example.com
Nov 24 22:20:08 compute-0 nova_compute[189608]: 2025-11-24 22:20:08.283 189613 DEBUG nova.compute.provider_tree [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:20:08 compute-0 nova_compute[189608]: 2025-11-24 22:20:08.303 189613 DEBUG nova.scheduler.client.report [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:20:08 compute-0 nova_compute[189608]: 2025-11-24 22:20:08.326 189613 DEBUG oslo_concurrency.lockutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.272s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:20:08 compute-0 nova_compute[189608]: 2025-11-24 22:20:08.328 189613 DEBUG nova.compute.manager [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 24 22:20:08 compute-0 nova_compute[189608]: 2025-11-24 22:20:08.387 189613 DEBUG nova.compute.manager [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Nov 24 22:20:08 compute-0 nova_compute[189608]: 2025-11-24 22:20:08.402 189613 INFO nova.virt.libvirt.driver [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 24 22:20:08 compute-0 nova_compute[189608]: 2025-11-24 22:20:08.438 189613 DEBUG nova.compute.manager [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 24 22:20:08 compute-0 nova_compute[189608]: 2025-11-24 22:20:08.531 189613 DEBUG nova.compute.manager [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 24 22:20:08 compute-0 nova_compute[189608]: 2025-11-24 22:20:08.533 189613 DEBUG nova.virt.libvirt.driver [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 24 22:20:08 compute-0 nova_compute[189608]: 2025-11-24 22:20:08.534 189613 INFO nova.virt.libvirt.driver [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] Creating image(s)
Nov 24 22:20:08 compute-0 nova_compute[189608]: 2025-11-24 22:20:08.535 189613 DEBUG oslo_concurrency.lockutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "/var/lib/nova/instances/91f0c72b-d856-4250-89de-f420d598e74a/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:20:08 compute-0 nova_compute[189608]: 2025-11-24 22:20:08.536 189613 DEBUG oslo_concurrency.lockutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "/var/lib/nova/instances/91f0c72b-d856-4250-89de-f420d598e74a/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:20:08 compute-0 nova_compute[189608]: 2025-11-24 22:20:08.537 189613 DEBUG oslo_concurrency.lockutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "/var/lib/nova/instances/91f0c72b-d856-4250-89de-f420d598e74a/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:20:08 compute-0 nova_compute[189608]: 2025-11-24 22:20:08.538 189613 DEBUG oslo_concurrency.lockutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "0c781545786ba0e2b6d5b36227eef817e147d42c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:20:08 compute-0 nova_compute[189608]: 2025-11-24 22:20:08.539 189613 DEBUG oslo_concurrency.lockutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "0c781545786ba0e2b6d5b36227eef817e147d42c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:20:08 compute-0 nova_compute[189608]: 2025-11-24 22:20:08.614 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:20:09 compute-0 nova_compute[189608]: 2025-11-24 22:20:09.782 189613 DEBUG oslo_concurrency.processutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0c781545786ba0e2b6d5b36227eef817e147d42c.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:20:09 compute-0 nova_compute[189608]: 2025-11-24 22:20:09.875 189613 DEBUG oslo_concurrency.processutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0c781545786ba0e2b6d5b36227eef817e147d42c.part --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:20:09 compute-0 nova_compute[189608]: 2025-11-24 22:20:09.877 189613 DEBUG nova.virt.images [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] 33556189-34d3-4ca9-8b50-7fa572df6d66 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Nov 24 22:20:09 compute-0 nova_compute[189608]: 2025-11-24 22:20:09.879 189613 DEBUG nova.privsep.utils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 24 22:20:09 compute-0 nova_compute[189608]: 2025-11-24 22:20:09.879 189613 DEBUG oslo_concurrency.processutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/0c781545786ba0e2b6d5b36227eef817e147d42c.part /var/lib/nova/instances/_base/0c781545786ba0e2b6d5b36227eef817e147d42c.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.091 189613 DEBUG oslo_concurrency.processutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/0c781545786ba0e2b6d5b36227eef817e147d42c.part /var/lib/nova/instances/_base/0c781545786ba0e2b6d5b36227eef817e147d42c.converted" returned: 0 in 0.211s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.096 189613 DEBUG oslo_concurrency.processutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0c781545786ba0e2b6d5b36227eef817e147d42c.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.166 189613 DEBUG oslo_concurrency.processutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0c781545786ba0e2b6d5b36227eef817e147d42c.converted --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.167 189613 DEBUG oslo_concurrency.lockutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "0c781545786ba0e2b6d5b36227eef817e147d42c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.182 189613 DEBUG oslo_concurrency.processutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0c781545786ba0e2b6d5b36227eef817e147d42c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.256 189613 DEBUG oslo_concurrency.processutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0c781545786ba0e2b6d5b36227eef817e147d42c --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.258 189613 DEBUG oslo_concurrency.lockutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "0c781545786ba0e2b6d5b36227eef817e147d42c" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.259 189613 DEBUG oslo_concurrency.lockutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "0c781545786ba0e2b6d5b36227eef817e147d42c" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.271 189613 DEBUG oslo_concurrency.processutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0c781545786ba0e2b6d5b36227eef817e147d42c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.325 189613 DEBUG oslo_concurrency.processutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0c781545786ba0e2b6d5b36227eef817e147d42c --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.327 189613 DEBUG oslo_concurrency.processutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/0c781545786ba0e2b6d5b36227eef817e147d42c,backing_fmt=raw /var/lib/nova/instances/91f0c72b-d856-4250-89de-f420d598e74a/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.374 189613 DEBUG oslo_concurrency.processutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/0c781545786ba0e2b6d5b36227eef817e147d42c,backing_fmt=raw /var/lib/nova/instances/91f0c72b-d856-4250-89de-f420d598e74a/disk 1073741824" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.375 189613 DEBUG oslo_concurrency.lockutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "0c781545786ba0e2b6d5b36227eef817e147d42c" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.116s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.375 189613 DEBUG oslo_concurrency.processutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0c781545786ba0e2b6d5b36227eef817e147d42c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.454 189613 DEBUG oslo_concurrency.processutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0c781545786ba0e2b6d5b36227eef817e147d42c --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.454 189613 DEBUG nova.virt.disk.api [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Checking if we can resize image /var/lib/nova/instances/91f0c72b-d856-4250-89de-f420d598e74a/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.455 189613 DEBUG oslo_concurrency.processutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91f0c72b-d856-4250-89de-f420d598e74a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.517 189613 DEBUG oslo_concurrency.processutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91f0c72b-d856-4250-89de-f420d598e74a/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.519 189613 DEBUG nova.virt.disk.api [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Cannot resize image /var/lib/nova/instances/91f0c72b-d856-4250-89de-f420d598e74a/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.520 189613 DEBUG nova.objects.instance [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lazy-loading 'migration_context' on Instance uuid 91f0c72b-d856-4250-89de-f420d598e74a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.534 189613 DEBUG oslo_concurrency.lockutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "/var/lib/nova/instances/91f0c72b-d856-4250-89de-f420d598e74a/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.535 189613 DEBUG oslo_concurrency.lockutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "/var/lib/nova/instances/91f0c72b-d856-4250-89de-f420d598e74a/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.535 189613 DEBUG oslo_concurrency.lockutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "/var/lib/nova/instances/91f0c72b-d856-4250-89de-f420d598e74a/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.548 189613 DEBUG oslo_concurrency.processutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:20:10 compute-0 podman[246818]: 2025-11-24 22:20:10.578087833 +0000 UTC m=+0.114606993 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.613 189613 DEBUG oslo_concurrency.processutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.614 189613 DEBUG oslo_concurrency.lockutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.615 189613 DEBUG oslo_concurrency.lockutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:20:10 compute-0 podman[246817]: 2025-11-24 22:20:10.624718606 +0000 UTC m=+0.179554477 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.628 189613 DEBUG oslo_concurrency.processutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.710 189613 DEBUG oslo_concurrency.processutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.711 189613 DEBUG oslo_concurrency.processutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/91f0c72b-d856-4250-89de-f420d598e74a/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.747 189613 DEBUG oslo_concurrency.processutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/91f0c72b-d856-4250-89de-f420d598e74a/disk.eph0 1073741824" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.747 189613 DEBUG oslo_concurrency.lockutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.133s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.748 189613 DEBUG oslo_concurrency.processutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.810 189613 DEBUG oslo_concurrency.processutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.812 189613 DEBUG nova.virt.libvirt.driver [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
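[Editor's note] The qemu-img lines above trace Nova's Qcow2 image backend building the 1 GiB ephemeral disk as a copy-on-write overlay of the cached base image. A minimal sketch of that same command sequence, assuming qemu-img is on PATH; paths and the size are taken from the log, error handling is omitted:

    # Sketch only: reproduces the qemu-img calls logged by nova_compute above.
    import json
    import subprocess

    base = "/var/lib/nova/instances/_base/ephemeral_1_0706d66"
    overlay = "/var/lib/nova/instances/91f0c72b-d856-4250-89de-f420d598e74a/disk.eph0"

    # Inspect the cached base image (the logged "qemu-img info ... --output=json").
    info = json.loads(subprocess.check_output(
        ["qemu-img", "info", "--force-share", "--output=json", base]))

    # Create the qcow2 overlay that backs the instance's ephemeral disk.
    subprocess.check_call(
        ["qemu-img", "create", "-f", "qcow2",
         "-o", f"backing_file={base},backing_fmt=raw",
         overlay, "1073741824"])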
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.813 189613 DEBUG nova.virt.libvirt.driver [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] Ensure instance console log exists: /var/lib/nova/instances/91f0c72b-d856-4250-89de-f420d598e74a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.814 189613 DEBUG oslo_concurrency.lockutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.814 189613 DEBUG oslo_concurrency.lockutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.815 189613 DEBUG oslo_concurrency.lockutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
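[Editor's note] The "Acquiring lock / acquired / released" triplets above are oslo.concurrency's lockutils logging at debug level. A minimal sketch of the underlying pattern, assuming oslo.concurrency is installed; the lock name is the one shown in the log:

    # Sketch only: the lock pattern behind the Acquiring/acquired/released debug lines.
    from oslo_concurrency import lockutils

    with lockutils.lock("vgpu_resources"):
        # critical section: mediated-device allocation runs while the lock is held
        pass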
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.820 189613 DEBUG nova.virt.libvirt.driver [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-24T22:19:55Z,direct_url=<?>,disk_format='qcow2',id=33556189-34d3-4ca9-8b50-7fa572df6d66,min_disk=0,min_ram=0,name='fvt_testing_image',owner='309342b7e3e849b2a5dd56651d8fa068',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-24T22:20:00Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'size': 0, 'encryption_format': None, 'guest_format': None, 'image_id': '33556189-34d3-4ca9-8b50-7fa572df6d66'}], 'ephemerals': [{'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vdb', 'device_type': 'disk', 'encryption_secret_uuid': None, 'size': 1, 'encryption_format': None, 'guest_format': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.829 189613 WARNING nova.virt.libvirt.driver [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.841 189613 DEBUG nova.virt.libvirt.host [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.842 189613 DEBUG nova.virt.libvirt.host [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.847 189613 DEBUG nova.virt.libvirt.host [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.848 189613 DEBUG nova.virt.libvirt.host [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.848 189613 DEBUG nova.virt.libvirt.driver [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.849 189613 DEBUG nova.virt.hardware [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-24T22:20:03Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='95857b68-e5b4-49b4-a92f-efec0e7d6225',id=2,is_public=True,memory_mb=512,name='fvt_testing_flavor',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-24T22:19:55Z,direct_url=<?>,disk_format='qcow2',id=33556189-34d3-4ca9-8b50-7fa572df6d66,min_disk=0,min_ram=0,name='fvt_testing_image',owner='309342b7e3e849b2a5dd56651d8fa068',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-24T22:20:00Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.850 189613 DEBUG nova.virt.hardware [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.850 189613 DEBUG nova.virt.hardware [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.851 189613 DEBUG nova.virt.hardware [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.851 189613 DEBUG nova.virt.hardware [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.852 189613 DEBUG nova.virt.hardware [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.852 189613 DEBUG nova.virt.hardware [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.853 189613 DEBUG nova.virt.hardware [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.853 189613 DEBUG nova.virt.hardware [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.854 189613 DEBUG nova.virt.hardware [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.854 189613 DEBUG nova.virt.hardware [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.862 189613 DEBUG nova.objects.instance [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lazy-loading 'pci_devices' on Instance uuid 91f0c72b-d856-4250-89de-f420d598e74a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.879 189613 DEBUG nova.virt.libvirt.driver [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] End _get_guest_xml xml=<domain type="kvm">
Nov 24 22:20:10 compute-0 nova_compute[189608]:   <uuid>91f0c72b-d856-4250-89de-f420d598e74a</uuid>
Nov 24 22:20:10 compute-0 nova_compute[189608]:   <name>instance-00000005</name>
Nov 24 22:20:10 compute-0 nova_compute[189608]:   <memory>524288</memory>
Nov 24 22:20:10 compute-0 nova_compute[189608]:   <vcpu>1</vcpu>
Nov 24 22:20:10 compute-0 nova_compute[189608]:   <metadata>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 22:20:10 compute-0 nova_compute[189608]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:       <nova:name>fvt_testing_server</nova:name>
Nov 24 22:20:10 compute-0 nova_compute[189608]:       <nova:creationTime>2025-11-24 22:20:10</nova:creationTime>
Nov 24 22:20:10 compute-0 nova_compute[189608]:       <nova:flavor name="fvt_testing_flavor">
Nov 24 22:20:10 compute-0 nova_compute[189608]:         <nova:memory>512</nova:memory>
Nov 24 22:20:10 compute-0 nova_compute[189608]:         <nova:disk>1</nova:disk>
Nov 24 22:20:10 compute-0 nova_compute[189608]:         <nova:swap>0</nova:swap>
Nov 24 22:20:10 compute-0 nova_compute[189608]:         <nova:ephemeral>1</nova:ephemeral>
Nov 24 22:20:10 compute-0 nova_compute[189608]:         <nova:vcpus>1</nova:vcpus>
Nov 24 22:20:10 compute-0 nova_compute[189608]:       </nova:flavor>
Nov 24 22:20:10 compute-0 nova_compute[189608]:       <nova:owner>
Nov 24 22:20:10 compute-0 nova_compute[189608]:         <nova:user uuid="572aaac113f54af8a894707849aed6bf">admin</nova:user>
Nov 24 22:20:10 compute-0 nova_compute[189608]:         <nova:project uuid="309342b7e3e849b2a5dd56651d8fa068">admin</nova:project>
Nov 24 22:20:10 compute-0 nova_compute[189608]:       </nova:owner>
Nov 24 22:20:10 compute-0 nova_compute[189608]:       <nova:root type="image" uuid="33556189-34d3-4ca9-8b50-7fa572df6d66"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:       <nova:ports/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     </nova:instance>
Nov 24 22:20:10 compute-0 nova_compute[189608]:   </metadata>
Nov 24 22:20:10 compute-0 nova_compute[189608]:   <sysinfo type="smbios">
Nov 24 22:20:10 compute-0 nova_compute[189608]:     <system>
Nov 24 22:20:10 compute-0 nova_compute[189608]:       <entry name="manufacturer">RDO</entry>
Nov 24 22:20:10 compute-0 nova_compute[189608]:       <entry name="product">OpenStack Compute</entry>
Nov 24 22:20:10 compute-0 nova_compute[189608]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 22:20:10 compute-0 nova_compute[189608]:       <entry name="serial">91f0c72b-d856-4250-89de-f420d598e74a</entry>
Nov 24 22:20:10 compute-0 nova_compute[189608]:       <entry name="uuid">91f0c72b-d856-4250-89de-f420d598e74a</entry>
Nov 24 22:20:10 compute-0 nova_compute[189608]:       <entry name="family">Virtual Machine</entry>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     </system>
Nov 24 22:20:10 compute-0 nova_compute[189608]:   </sysinfo>
Nov 24 22:20:10 compute-0 nova_compute[189608]:   <os>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     <boot dev="hd"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     <smbios mode="sysinfo"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:   </os>
Nov 24 22:20:10 compute-0 nova_compute[189608]:   <features>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     <acpi/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     <apic/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     <vmcoreinfo/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:   </features>
Nov 24 22:20:10 compute-0 nova_compute[189608]:   <clock offset="utc">
Nov 24 22:20:10 compute-0 nova_compute[189608]:     <timer name="pit" tickpolicy="delay"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     <timer name="hpet" present="no"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:   </clock>
Nov 24 22:20:10 compute-0 nova_compute[189608]:   <cpu mode="host-model" match="exact">
Nov 24 22:20:10 compute-0 nova_compute[189608]:     <topology sockets="1" cores="1" threads="1"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:   </cpu>
Nov 24 22:20:10 compute-0 nova_compute[189608]:   <devices>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     <disk type="file" device="disk">
Nov 24 22:20:10 compute-0 nova_compute[189608]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:       <source file="/var/lib/nova/instances/91f0c72b-d856-4250-89de-f420d598e74a/disk"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:       <target dev="vda" bus="virtio"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     </disk>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     <disk type="file" device="disk">
Nov 24 22:20:10 compute-0 nova_compute[189608]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:       <source file="/var/lib/nova/instances/91f0c72b-d856-4250-89de-f420d598e74a/disk.eph0"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:       <target dev="vdb" bus="virtio"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     </disk>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     <disk type="file" device="cdrom">
Nov 24 22:20:10 compute-0 nova_compute[189608]:       <driver name="qemu" type="raw" cache="none"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:       <source file="/var/lib/nova/instances/91f0c72b-d856-4250-89de-f420d598e74a/disk.config"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:       <target dev="sda" bus="sata"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     </disk>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     <serial type="pty">
Nov 24 22:20:10 compute-0 nova_compute[189608]:       <log file="/var/lib/nova/instances/91f0c72b-d856-4250-89de-f420d598e74a/console.log" append="off"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     </serial>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     <video>
Nov 24 22:20:10 compute-0 nova_compute[189608]:       <model type="virtio"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     </video>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     <input type="tablet" bus="usb"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     <rng model="virtio">
Nov 24 22:20:10 compute-0 nova_compute[189608]:       <backend model="random">/dev/urandom</backend>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     </rng>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     <controller type="usb" index="0"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     <memballoon model="virtio">
Nov 24 22:20:10 compute-0 nova_compute[189608]:       <stats period="10"/>
Nov 24 22:20:10 compute-0 nova_compute[189608]:     </memballoon>
Nov 24 22:20:10 compute-0 nova_compute[189608]:   </devices>
Nov 24 22:20:10 compute-0 nova_compute[189608]: </domain>
Nov 24 22:20:10 compute-0 nova_compute[189608]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
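[Editor's note] The XML dumped above is the guest definition Nova hands to libvirt. A minimal sketch of how such a definition can be submitted through the libvirt Python binding, assuming libvirt-python is available and `xml` holds the <domain> document printed above (this is an illustration of the API, not necessarily the exact call path Nova uses):

    # Sketch only: submitting a domain XML like the one logged above via libvirt-python.
    import libvirt

    xml = "..."  # the <domain type="kvm"> document from the log

    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.defineXML(xml)   # persist the definition
        dom.create()                # start the guest
    finally:
        conn.close()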
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.934 189613 DEBUG nova.virt.libvirt.driver [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.934 189613 DEBUG nova.virt.libvirt.driver [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.934 189613 DEBUG nova.virt.libvirt.driver [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 22:20:10 compute-0 nova_compute[189608]: 2025-11-24 22:20:10.935 189613 INFO nova.virt.libvirt.driver [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] Using config drive
Nov 24 22:20:11 compute-0 nova_compute[189608]: 2025-11-24 22:20:11.569 189613 INFO nova.virt.libvirt.driver [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] Creating config drive at /var/lib/nova/instances/91f0c72b-d856-4250-89de-f420d598e74a/disk.config
Nov 24 22:20:11 compute-0 nova_compute[189608]: 2025-11-24 22:20:11.580 189613 DEBUG oslo_concurrency.processutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/91f0c72b-d856-4250-89de-f420d598e74a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp53rxay_a execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:20:11 compute-0 nova_compute[189608]: 2025-11-24 22:20:11.706 189613 DEBUG oslo_concurrency.processutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/91f0c72b-d856-4250-89de-f420d598e74a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp53rxay_a" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
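[Editor's note] The mkisofs invocation above builds the config drive ISO (volume label config-2) from a temporary staging directory. A minimal sketch of the same call, assuming mkisofs is installed; the staging directory from the log will already have been removed by Nova, so it is shown only for illustration:

    # Sketch only: the config-drive ISO build traced by the mkisofs lines above.
    import subprocess

    iso_path = "/var/lib/nova/instances/91f0c72b-d856-4250-89de-f420d598e74a/disk.config"
    source_dir = "/tmp/tmp53rxay_a"  # staging directory named in the log

    subprocess.check_call(
        ["/usr/bin/mkisofs", "-o", iso_path,
         "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute", "-quiet", "-J", "-r",
         "-V", "config-2", source_dir])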
Nov 24 22:20:11 compute-0 systemd-machined[155884]: New machine qemu-5-instance-00000005.
Nov 24 22:20:11 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Nov 24 22:20:12 compute-0 nova_compute[189608]: 2025-11-24 22:20:12.667 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:20:12 compute-0 nova_compute[189608]: 2025-11-24 22:20:12.683 189613 DEBUG nova.virt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Emitting event <LifecycleEvent: 1764022812.6828532, 91f0c72b-d856-4250-89de-f420d598e74a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:20:12 compute-0 nova_compute[189608]: 2025-11-24 22:20:12.684 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] VM Resumed (Lifecycle Event)
Nov 24 22:20:12 compute-0 nova_compute[189608]: 2025-11-24 22:20:12.689 189613 DEBUG nova.compute.manager [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 24 22:20:12 compute-0 nova_compute[189608]: 2025-11-24 22:20:12.690 189613 DEBUG nova.virt.libvirt.driver [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 24 22:20:12 compute-0 nova_compute[189608]: 2025-11-24 22:20:12.697 189613 INFO nova.virt.libvirt.driver [-] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] Instance spawned successfully.
Nov 24 22:20:12 compute-0 nova_compute[189608]: 2025-11-24 22:20:12.698 189613 DEBUG nova.virt.libvirt.driver [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 24 22:20:12 compute-0 nova_compute[189608]: 2025-11-24 22:20:12.722 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:20:12 compute-0 nova_compute[189608]: 2025-11-24 22:20:12.737 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 22:20:12 compute-0 nova_compute[189608]: 2025-11-24 22:20:12.745 189613 DEBUG nova.virt.libvirt.driver [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:20:12 compute-0 nova_compute[189608]: 2025-11-24 22:20:12.745 189613 DEBUG nova.virt.libvirt.driver [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:20:12 compute-0 nova_compute[189608]: 2025-11-24 22:20:12.746 189613 DEBUG nova.virt.libvirt.driver [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:20:12 compute-0 nova_compute[189608]: 2025-11-24 22:20:12.747 189613 DEBUG nova.virt.libvirt.driver [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:20:12 compute-0 nova_compute[189608]: 2025-11-24 22:20:12.748 189613 DEBUG nova.virt.libvirt.driver [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:20:12 compute-0 nova_compute[189608]: 2025-11-24 22:20:12.748 189613 DEBUG nova.virt.libvirt.driver [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:20:12 compute-0 nova_compute[189608]: 2025-11-24 22:20:12.754 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 22:20:12 compute-0 nova_compute[189608]: 2025-11-24 22:20:12.755 189613 DEBUG nova.virt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Emitting event <LifecycleEvent: 1764022812.6890583, 91f0c72b-d856-4250-89de-f420d598e74a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:20:12 compute-0 nova_compute[189608]: 2025-11-24 22:20:12.755 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] VM Started (Lifecycle Event)
Nov 24 22:20:12 compute-0 nova_compute[189608]: 2025-11-24 22:20:12.777 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:20:12 compute-0 nova_compute[189608]: 2025-11-24 22:20:12.785 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 22:20:12 compute-0 nova_compute[189608]: 2025-11-24 22:20:12.806 189613 INFO nova.compute.manager [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] Took 4.27 seconds to spawn the instance on the hypervisor.
Nov 24 22:20:12 compute-0 nova_compute[189608]: 2025-11-24 22:20:12.806 189613 DEBUG nova.compute.manager [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:20:12 compute-0 nova_compute[189608]: 2025-11-24 22:20:12.809 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 22:20:12 compute-0 nova_compute[189608]: 2025-11-24 22:20:12.884 189613 INFO nova.compute.manager [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] Took 4.87 seconds to build instance.
Nov 24 22:20:12 compute-0 nova_compute[189608]: 2025-11-24 22:20:12.899 189613 DEBUG oslo_concurrency.lockutils [None req-90ae2c54-ed76-4d50-9d69-493498a39b64 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "91f0c72b-d856-4250-89de-f420d598e74a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.970s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:20:13 compute-0 nova_compute[189608]: 2025-11-24 22:20:13.617 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:20:13 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 24 22:20:13 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 24 22:20:17 compute-0 podman[246922]: 2025-11-24 22:20:17.571021324 +0000 UTC m=+0.112889169 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 24 22:20:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:17.625 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 24 22:20:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:17.626 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 24 22:20:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:17.626 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:20:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:17.627 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f4c57fbbfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:20:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:17.627 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:20:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:17.629 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:20:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:17.629 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:20:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:17.629 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:20:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:17.630 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:20:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:17.630 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:20:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:17.631 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:20:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:17.631 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:20:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:17.631 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:20:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:17.631 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:20:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:17.632 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:20:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:17.632 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:20:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:17.632 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:20:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:17.633 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:20:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:17.634 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:20:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:17.635 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 91f0c72b-d856-4250-89de-f420d598e74a from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 24 22:20:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:17.635 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:20:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:17.636 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/91f0c72b-d856-4250-89de-f420d598e74a -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}64060fdce0de69c84bc4ec1fbe9dbdfbc4dd57ffec34862b3445d9841d3967a2" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 24 22:20:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:17.637 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:20:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:17.639 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:20:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:17.639 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:20:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:17.641 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:20:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:17.641 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:20:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:17.641 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:20:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:17.642 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:20:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:17.642 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:20:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:17.642 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:20:17 compute-0 nova_compute[189608]: 2025-11-24 22:20:17.673 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.302 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1572 Content-Type: application/json Date: Mon, 24 Nov 2025 22:20:17 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-2ac944e8-c353-4237-b46d-26b46d8ac114 x-openstack-request-id: req-2ac944e8-c353-4237-b46d-26b46d8ac114 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.302 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "91f0c72b-d856-4250-89de-f420d598e74a", "name": "fvt_testing_server", "status": "ACTIVE", "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "user_id": "572aaac113f54af8a894707849aed6bf", "metadata": {}, "hostId": "138597a5fd3bc726da772e57d320c412a04081cc4fa53fa9b77f5b6a", "image": {"id": "33556189-34d3-4ca9-8b50-7fa572df6d66", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/33556189-34d3-4ca9-8b50-7fa572df6d66"}]}, "flavor": {"id": "95857b68-e5b4-49b4-a92f-efec0e7d6225", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/95857b68-e5b4-49b4-a92f-efec0e7d6225"}]}, "created": "2025-11-24T22:20:06Z", "updated": "2025-11-24T22:20:12Z", "addresses": {}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/91f0c72b-d856-4250-89de-f420d598e74a"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/91f0c72b-d856-4250-89de-f420d598e74a"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-11-24T22:20:12.000000", "OS-SRV-USG:terminated_at": null, "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000005", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.302 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/91f0c72b-d856-4250-89de-f420d598e74a used request id req-2ac944e8-c353-4237-b46d-26b46d8ac114 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
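The RESP / RESP BODY / GET lines above are the agent fetching full server details from the Nova compute API through a keystoneauth1 session. A hedged sketch of an equivalent standalone lookup; the auth URL and credentials are placeholders, only the server UUID comes from the log:

    # Sketch: fetch the same server record the agent logged above, using
    # keystoneauth1 + python-novaclient. All credentials are placeholders.
    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from novaclient import client

    auth = v3.Password(auth_url="https://keystone-internal.openstack.svc:5000/v3",
                       username="ceilometer", password="secret",
                       project_name="service",
                       user_domain_name="Default", project_domain_name="Default")
    sess = session.Session(auth=auth)
    nova = client.Client("2.1", session=sess)

    server = nova.servers.get("91f0c72b-d856-4250-89de-f420d598e74a")
    print(server.name, server.status)               # fvt_testing_server ACTIVE
    print(getattr(server, "OS-EXT-SRV-ATTR:host"))  # compute-0.ctlplane.example.com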
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.303 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '91f0c72b-d856-4250-89de-f420d598e74a', 'name': 'fvt_testing_server', 'flavor': {'id': '95857b68-e5b4-49b4-a92f-efec0e7d6225', 'name': 'fvt_testing_flavor', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '33556189-34d3-4ca9-8b50-7fa572df6d66'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000005', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '309342b7e3e849b2a5dd56651d8fa068', 'user_id': '572aaac113f54af8a894707849aed6bf', 'hostId': '138597a5fd3bc726da772e57d320c412a04081cc4fa53fa9b77f5b6a', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.306 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7e7d375c-a42c-41c5-934f-c46941a40067', 'name': 'vn-7lqlsa5-qwni3m7kkkje-hcw6uchrxoeb-vnf-7cvls2zpo5gs', 'flavor': {'id': 'cb3b4b8d-66d1-4524-ab48-d562ca8e5b4b', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a63b9561-12dc-4c11-858f-aa6fafbed036'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '309342b7e3e849b2a5dd56651d8fa068', 'user_id': '572aaac113f54af8a894707849aed6bf', 'hostId': '138597a5fd3bc726da772e57d320c412a04081cc4fa53fa9b77f5b6a', 'status': 'active', 'metadata': {'metering.server_group': 'b438824c-ce52-4539-9db6-355e0ca018db'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.310 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'ea741b45-c6b4-41c0-a70f-c752b616faa2', 'name': 'test_0', 'flavor': {'id': 'cb3b4b8d-66d1-4524-ab48-d562ca8e5b4b', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a63b9561-12dc-4c11-858f-aa6fafbed036'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '309342b7e3e849b2a5dd56651d8fa068', 'user_id': '572aaac113f54af8a894707849aed6bf', 'hostId': '138597a5fd3bc726da772e57d320c412a04081cc4fa53fa9b77f5b6a', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
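discover_libvirt_polling emits one "instance data" dict per locally running guest. A rough sketch of how such per-domain records could be assembled straight from libvirt; the field selection mirrors the log but the mapping itself is illustrative:

    # Sketch: enumerate local libvirt domains and build instance-style records
    # similar to the "instance data: {...}" lines above.
    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    instances = []
    for dom in conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE):
        state, _reason = dom.state()
        instances.append({
            "id": dom.UUIDString(),                       # e.g. 91f0c72b-d856-...
            "OS-EXT-SRV-ATTR:instance_name": dom.name(),  # e.g. instance-00000005
            "OS-EXT-STS:vm_state": "running"
                if state == libvirt.VIR_DOMAIN_RUNNING else "stopped",
        })
    print(instances)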
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.310 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.310 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.310 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.310 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.312 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-24T22:20:18.310779) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
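Alongside each poll, the worker records a per-pollster heartbeat that another process then reports as updated. A tiny sketch of that bookkeeping, under the assumption that it is simply a pollster-name to last-seen-timestamp map:

    # Sketch: per-pollster heartbeat bookkeeping as a name -> timestamp map,
    # matching lines like "Updated heartbeat for network.outgoing.packets.drop".
    # This is an assumption about the mechanism, not ceilometer's actual code.
    import datetime

    heartbeats = {}

    def heartbeat(pollster_name):
        ts = datetime.datetime.now(datetime.timezone.utc)
        heartbeats[pollster_name] = ts
        print(f"Updated heartbeat for {pollster_name} ({ts.isoformat()})")

    heartbeat("network.outgoing.packets.drop")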
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.321 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.328 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.328 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
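The zero-valued network.outgoing.packets.drop samples correspond to the tx_drop counter of each guest interface. A sketch of reading that counter directly from libvirt; the tap device names are looked up from the domain XML, and the domain name comes from the log above:

    # Sketch: read the tx_drop counter behind network.outgoing.packets.drop for
    # every interface of a running domain. interfaceStats() returns
    # (rx_bytes, rx_packets, rx_errs, rx_drop, tx_bytes, tx_packets, tx_errs, tx_drop).
    import xml.etree.ElementTree as ET
    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByName("instance-00000004")     # from the discovery lines above

    tree = ET.fromstring(dom.XMLDesc())
    for iface in tree.findall("./devices/interface/target"):
        dev = iface.get("dev")                       # e.g. a tap device name
        stats = dom.interfaceStats(dev)
        print(dev, "tx_drop =", stats[7])            # 0 in the samples logged above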
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f4c580340b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.329 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.329 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.329 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.329 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.330 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.330 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-24T22:20:18.329891) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.330 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.331 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.331 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f4c57fba4e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.332 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.332 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.332 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.332 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.333 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-24T22:20:18.332612) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.370 14 DEBUG ceilometer.compute.pollsters [-] 91f0c72b-d856-4250-89de-f420d598e74a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.370 14 DEBUG ceilometer.compute.pollsters [-] 91f0c72b-d856-4250-89de-f420d598e74a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.371 14 DEBUG ceilometer.compute.pollsters [-] 91f0c72b-d856-4250-89de-f420d598e74a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.409 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.410 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.410 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.448 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.448 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.449 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.450 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
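Each instance reports three disk.device.capacity samples: two at 1073741824 bytes, matching the flavor's 1 GiB root and ephemeral disks, plus a much smaller device that is most likely the config-drive image. A sketch of reading those capacities from libvirt:

    # Sketch: per-device capacity for one domain, the counter behind
    # disk.device.capacity. blockInfo() returns [capacity, allocation, physical].
    import xml.etree.ElementTree as ET
    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByName("instance-00000005")

    tree = ET.fromstring(dom.XMLDesc())
    for disk in tree.findall("./devices/disk/target"):
        dev = disk.get("dev")                        # e.g. vda, vdb
        capacity, allocation, physical = dom.blockInfo(dev)
        print(dev, "capacity =", capacity)           # e.g. 1073741824 for the 1 GiB disks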
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.451 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f4c57fbb080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.451 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.451 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.451 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.451 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.452 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-24T22:20:18.451801) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.552 14 DEBUG ceilometer.compute.pollsters [-] 91f0c72b-d856-4250-89de-f420d598e74a/disk.device.read.bytes volume: 18348032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.553 14 DEBUG ceilometer.compute.pollsters [-] 91f0c72b-d856-4250-89de-f420d598e74a/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.553 14 DEBUG ceilometer.compute.pollsters [-] 91f0c72b-d856-4250-89de-f420d598e74a/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 nova_compute[189608]: 2025-11-24 22:20:18.620 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.675 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.676 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.676 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.754 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.755 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.755 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.756 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
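disk.device.read.bytes (and disk.device.read.requests, polled a little later) come from the cumulative per-device read counters that libvirt exposes via blockStats(). A sketch:

    # Sketch: cumulative read counters per device, the source of
    # disk.device.read.bytes and disk.device.read.requests. blockStats()
    # returns (rd_req, rd_bytes, wr_req, wr_bytes, errs).
    import xml.etree.ElementTree as ET
    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByName("instance-00000001")     # test_0 in the log above

    tree = ET.fromstring(dom.XMLDesc())
    for disk in tree.findall("./devices/disk/target"):
        dev = disk.get("dev")
        rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats(dev)
        print(dev, "rd_bytes =", rd_bytes, "rd_req =", rd_req)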
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.756 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f4c57fbb1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.756 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.756 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.756 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.756 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.756 14 DEBUG ceilometer.compute.pollsters [-] 91f0c72b-d856-4250-89de-f420d598e74a/disk.device.read.latency volume: 1027891593 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.757 14 DEBUG ceilometer.compute.pollsters [-] 91f0c72b-d856-4250-89de-f420d598e74a/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.757 14 DEBUG ceilometer.compute.pollsters [-] 91f0c72b-d856-4250-89de-f420d598e74a/disk.device.read.latency volume: 2915361 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.757 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-24T22:20:18.756744) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.758 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.read.latency volume: 742675991 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.758 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.read.latency volume: 148600369 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.758 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.read.latency volume: 107984847 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.759 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.latency volume: 667223647 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.759 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.latency volume: 118019506 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.760 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.latency volume: 84934831 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.761 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
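disk.device.read.latency is cumulative read time in nanoseconds; libvirt's extended block statistics expose it as rd_total_times. A sketch, with that key mapping treated as an assumption:

    # Sketch: cumulative read time per device (ns), the likely source of
    # disk.device.read.latency. blockStatsFlags() returns a dict; the
    # 'rd_total_times' key is assumed here to hold total read time in ns.
    import xml.etree.ElementTree as ET
    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByName("instance-00000005")

    tree = ET.fromstring(dom.XMLDesc())
    for disk in tree.findall("./devices/disk/target"):
        dev = disk.get("dev")
        stats = dom.blockStatsFlags(dev)
        print(dev, "rd_total_times =", stats.get("rd_total_times"))  # e.g. 1027891593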
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.761 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f4c57fb9730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.761 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.762 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.762 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.762 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.762 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-24T22:20:18.762497) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.791 14 DEBUG ceilometer.compute.pollsters [-] 91f0c72b-d856-4250-89de-f420d598e74a/cpu volume: 5940000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.816 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/cpu volume: 40000000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.841 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/cpu volume: 46090000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.841 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
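The cpu samples are cumulative guest CPU time in nanoseconds (5.94 s, 40 s and 46.09 s for the three instances), which libvirt reports as the last field of dom.info(). A sketch:

    # Sketch: cumulative guest CPU time in nanoseconds, the counter behind the
    # "cpu" pollster. dom.info() returns
    # [state, maxMem_KiB, mem_KiB, nrVirtCpu, cpuTime_ns].
    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    for name in ("instance-00000005", "instance-00000004", "instance-00000001"):
        dom = conn.lookupByName(name)
        state, max_mem, mem, vcpus, cpu_time_ns = dom.info()
        print(name, "cpu_time_ns =", cpu_time_ns)    # e.g. 5940000000 == 5.94 s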
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.842 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f4c57fbb200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.842 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.842 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.842 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.842 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.842 14 DEBUG ceilometer.compute.pollsters [-] 91f0c72b-d856-4250-89de-f420d598e74a/disk.device.read.requests volume: 573 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.843 14 DEBUG ceilometer.compute.pollsters [-] 91f0c72b-d856-4250-89de-f420d598e74a/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.843 14 DEBUG ceilometer.compute.pollsters [-] 91f0c72b-d856-4250-89de-f420d598e74a/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.843 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.844 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.844 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-24T22:20:18.842596) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.844 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.844 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.844 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.845 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.845 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.846 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f4c57fbb260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.846 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.846 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.846 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.846 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.846 14 DEBUG ceilometer.compute.pollsters [-] 91f0c72b-d856-4250-89de-f420d598e74a/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.847 14 DEBUG ceilometer.compute.pollsters [-] 91f0c72b-d856-4250-89de-f420d598e74a/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.847 14 DEBUG ceilometer.compute.pollsters [-] 91f0c72b-d856-4250-89de-f420d598e74a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.847 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-24T22:20:18.846690) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.847 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.848 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.848 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.848 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.849 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.849 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.849 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.850 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f4c57fbb2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.850 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.850 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.850 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.850 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.850 14 DEBUG ceilometer.compute.pollsters [-] 91f0c72b-d856-4250-89de-f420d598e74a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.850 14 DEBUG ceilometer.compute.pollsters [-] 91f0c72b-d856-4250-89de-f420d598e74a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.851 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-24T22:20:18.850433) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.851 14 DEBUG ceilometer.compute.pollsters [-] 91f0c72b-d856-4250-89de-f420d598e74a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.851 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.851 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.852 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.852 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.852 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.852 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.853 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.853 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f4c57fbb320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.853 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.854 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.854 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.854 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.854 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-24T22:20:18.854186) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.854 14 DEBUG ceilometer.compute.pollsters [-] 91f0c72b-d856-4250-89de-f420d598e74a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.854 14 DEBUG ceilometer.compute.pollsters [-] 91f0c72b-d856-4250-89de-f420d598e74a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.855 14 DEBUG ceilometer.compute.pollsters [-] 91f0c72b-d856-4250-89de-f420d598e74a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.855 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.write.latency volume: 2394094265 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.855 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.write.latency volume: 18006599 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.856 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.856 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.latency volume: 2026985911 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.856 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.latency volume: 25037348 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.857 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.857 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.857 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f4c57fbb380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.857 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.858 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.858 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.858 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.858 14 DEBUG ceilometer.compute.pollsters [-] 91f0c72b-d856-4250-89de-f420d598e74a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.858 14 DEBUG ceilometer.compute.pollsters [-] 91f0c72b-d856-4250-89de-f420d598e74a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.858 14 DEBUG ceilometer.compute.pollsters [-] 91f0c72b-d856-4250-89de-f420d598e74a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.859 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.859 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.859 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-24T22:20:18.858197) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.859 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.860 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.860 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.860 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.861 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.861 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f4c58034380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.861 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.861 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.861 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.861 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.861 14 DEBUG ceilometer.compute.pollsters [-] 91f0c72b-d856-4250-89de-f420d598e74a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.862 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-24T22:20:18.861838) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.862 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.862 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.863 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
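power.state is 1 for all three guests, which matches libvirt's VIR_DOMAIN_RUNNING and the OS-EXT-STS:power_state value Nova returned earlier. A sketch of that mapping:

    # Sketch: map libvirt domain state to the integer logged as power.state.
    # VIR_DOMAIN_RUNNING == 1, matching the "power.state volume: 1" samples.
    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    for dom in conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE):
        state, _reason = dom.state()
        print(dom.name(), "power.state =", state,
              "(running)" if state == libvirt.VIR_DOMAIN_RUNNING else "")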
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.863 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f4c57fbb740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.863 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.863 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.863 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.863 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.864 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-24T22:20:18.863719) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.864 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.864 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.864 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
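Unlike the cumulative counters above, network.incoming.bytes.delta is the difference between the current rx_bytes reading and the one from the previous polling interval, which is why both instances report only 84 bytes here. A sketch of that delta bookkeeping; the cache layout keyed by instance and device is an assumption:

    # Sketch: turn a cumulative rx_bytes counter into a per-interval delta, as
    # the network.incoming.bytes.delta samples suggest. The cache structure is
    # an assumption for illustration, not ceilometer's actual implementation.
    previous = {}   # (instance_id, dev) -> last cumulative rx_bytes

    def incoming_bytes_delta(instance_id, dev, rx_bytes_now):
        key = (instance_id, dev)
        last = previous.get(key)
        previous[key] = rx_bytes_now
        return None if last is None else rx_bytes_now - last

    # The first poll primes the cache, the second yields the delta.
    incoming_bytes_delta("7e7d375c", "tap0", 1000)          # -> None
    print(incoming_bytes_delta("7e7d375c", "tap0", 1084))   # -> 84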
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.864 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f4c57fbb3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.865 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.865 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.865 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.865 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.866 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.866 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-24T22:20:18.865267) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.866 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f4c57fbb9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.866 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.866 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.866 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.866 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.866 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.867 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: fvt_testing_server>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: fvt_testing_server>]
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.867 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f4c57fbb440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.867 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.867 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.867 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.867 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.868 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.868 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f4c57fbbc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.868 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.869 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.869 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.869 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-24T22:20:18.866740) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.869 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-24T22:20:18.867959) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.869 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.869 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.870 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.870 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-24T22:20:18.869548) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.870 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.870 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f4c57fbbcb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.870 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.871 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.871 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.871 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.871 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.871 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.871 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-24T22:20:18.871245) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.872 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.872 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f4c57fbbd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.872 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.872 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.872 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.872 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.873 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.873 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.873 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-24T22:20:18.872901) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.873 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.874 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f4c57fbbda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.874 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.874 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.874 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.874 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.874 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.outgoing.bytes volume: 2398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.874 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.875 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.875 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f4c57fbbe30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.876 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-24T22:20:18.874542) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.876 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.876 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.876 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.876 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.876 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.876 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.877 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-24T22:20:18.876301) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.877 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.877 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f4c57fbb680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.877 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.877 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.877 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.877 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.878 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-24T22:20:18.877878) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.878 14 DEBUG ceilometer.compute.pollsters [-] 91f0c72b-d856-4250-89de-f420d598e74a/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.878 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic in not available for instance 91f0c72b-d856-4250-89de-f420d598e74a: ceilometer.compute.pollsters.NoVolumeException
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.878 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/memory.usage volume: 48.921875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.878 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/memory.usage volume: 48.9140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.879 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.879 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f4c57fbbec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.879 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.879 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.879 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.879 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.879 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.880 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: fvt_testing_server>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: fvt_testing_server>]
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.880 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f4c57fbb6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.880 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.880 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.880 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.880 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.880 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.incoming.bytes volume: 1696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.881 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.881 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-24T22:20:18.879599) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.881 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-24T22:20:18.880799) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.881 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.881 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f4c59b99130>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.882 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.882 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.882 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.882 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.882 14 DEBUG ceilometer.compute.pollsters [-] 91f0c72b-d856-4250-89de-f420d598e74a/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.882 14 DEBUG ceilometer.compute.pollsters [-] 91f0c72b-d856-4250-89de-f420d598e74a/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.882 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-24T22:20:18.882277) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.882 14 DEBUG ceilometer.compute.pollsters [-] 91f0c72b-d856-4250-89de-f420d598e74a/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.883 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.allocation volume: 22290432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.883 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.883 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.883 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.883 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.884 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.884 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.884 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f4c57fbbf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.884 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.884 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.884 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.884 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.884 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.885 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.885 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.885 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-24T22:20:18.884887) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.885 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.886 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.886 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.886 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.886 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.886 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.886 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.886 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.887 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.887 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.887 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.887 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.887 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.887 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.887 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.887 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.887 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.887 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.888 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.888 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.888 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.888 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.888 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.888 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.888 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:20:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:20:18.888 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:20:19 compute-0 nova_compute[189608]: 2025-11-24 22:20:19.795 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:20:19 compute-0 nova_compute[189608]: 2025-11-24 22:20:19.795 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 22:20:20 compute-0 nova_compute[189608]: 2025-11-24 22:20:20.007 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "refresh_cache-7e7d375c-a42c-41c5-934f-c46941a40067" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:20:20 compute-0 nova_compute[189608]: 2025-11-24 22:20:20.007 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquired lock "refresh_cache-7e7d375c-a42c-41c5-934f-c46941a40067" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:20:20 compute-0 nova_compute[189608]: 2025-11-24 22:20:20.008 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 24 22:20:21 compute-0 nova_compute[189608]: 2025-11-24 22:20:21.688 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Updating instance_info_cache with network_info: [{"id": "482c3cfb-c114-4d01-aa49-09b8d4fdaaa5", "address": "fa:16:3e:b3:8e:1d", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.49", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap482c3cfb-c1", "ovs_interfaceid": "482c3cfb-c114-4d01-aa49-09b8d4fdaaa5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:20:21 compute-0 nova_compute[189608]: 2025-11-24 22:20:21.705 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Releasing lock "refresh_cache-7e7d375c-a42c-41c5-934f-c46941a40067" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:20:21 compute-0 nova_compute[189608]: 2025-11-24 22:20:21.706 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 24 22:20:22 compute-0 nova_compute[189608]: 2025-11-24 22:20:22.677 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:20:23 compute-0 nova_compute[189608]: 2025-11-24 22:20:23.623 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:20:24 compute-0 podman[246948]: 2025-11-24 22:20:24.582174354 +0000 UTC m=+0.128686373 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 24 22:20:25 compute-0 nova_compute[189608]: 2025-11-24 22:20:25.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:20:25 compute-0 nova_compute[189608]: 2025-11-24 22:20:25.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:20:25 compute-0 nova_compute[189608]: 2025-11-24 22:20:25.821 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:20:25 compute-0 nova_compute[189608]: 2025-11-24 22:20:25.822 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:20:25 compute-0 nova_compute[189608]: 2025-11-24 22:20:25.822 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:20:25 compute-0 nova_compute[189608]: 2025-11-24 22:20:25.823 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 22:20:25 compute-0 nova_compute[189608]: 2025-11-24 22:20:25.936 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91f0c72b-d856-4250-89de-f420d598e74a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:20:26 compute-0 nova_compute[189608]: 2025-11-24 22:20:26.017 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91f0c72b-d856-4250-89de-f420d598e74a/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:20:26 compute-0 nova_compute[189608]: 2025-11-24 22:20:26.019 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91f0c72b-d856-4250-89de-f420d598e74a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:20:26 compute-0 nova_compute[189608]: 2025-11-24 22:20:26.078 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91f0c72b-d856-4250-89de-f420d598e74a/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:20:26 compute-0 nova_compute[189608]: 2025-11-24 22:20:26.082 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91f0c72b-d856-4250-89de-f420d598e74a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:20:26 compute-0 nova_compute[189608]: 2025-11-24 22:20:26.185 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91f0c72b-d856-4250-89de-f420d598e74a/disk.eph0 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:20:26 compute-0 nova_compute[189608]: 2025-11-24 22:20:26.188 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91f0c72b-d856-4250-89de-f420d598e74a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:20:26 compute-0 nova_compute[189608]: 2025-11-24 22:20:26.290 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91f0c72b-d856-4250-89de-f420d598e74a/disk.eph0 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:20:26 compute-0 nova_compute[189608]: 2025-11-24 22:20:26.302 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:20:26 compute-0 nova_compute[189608]: 2025-11-24 22:20:26.400 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:20:26 compute-0 nova_compute[189608]: 2025-11-24 22:20:26.403 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:20:26 compute-0 nova_compute[189608]: 2025-11-24 22:20:26.484 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:20:26 compute-0 nova_compute[189608]: 2025-11-24 22:20:26.486 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:20:26 compute-0 nova_compute[189608]: 2025-11-24 22:20:26.581 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.eph0 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:20:26 compute-0 nova_compute[189608]: 2025-11-24 22:20:26.583 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:20:26 compute-0 nova_compute[189608]: 2025-11-24 22:20:26.685 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.eph0 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:20:26 compute-0 nova_compute[189608]: 2025-11-24 22:20:26.699 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:20:26 compute-0 nova_compute[189608]: 2025-11-24 22:20:26.788 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:20:26 compute-0 nova_compute[189608]: 2025-11-24 22:20:26.789 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:20:26 compute-0 nova_compute[189608]: 2025-11-24 22:20:26.888 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:20:26 compute-0 nova_compute[189608]: 2025-11-24 22:20:26.890 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:20:26 compute-0 nova_compute[189608]: 2025-11-24 22:20:26.954 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:20:26 compute-0 nova_compute[189608]: 2025-11-24 22:20:26.955 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:20:27 compute-0 nova_compute[189608]: 2025-11-24 22:20:27.010 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:20:27 compute-0 podman[247003]: 2025-11-24 22:20:27.568776801 +0000 UTC m=+0.131848101 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, managed_by=edpm_ansible)
Nov 24 22:20:27 compute-0 nova_compute[189608]: 2025-11-24 22:20:27.588 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:20:27 compute-0 nova_compute[189608]: 2025-11-24 22:20:27.590 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4780MB free_disk=72.15514373779297GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 22:20:27 compute-0 nova_compute[189608]: 2025-11-24 22:20:27.590 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:20:27 compute-0 nova_compute[189608]: 2025-11-24 22:20:27.591 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:20:27 compute-0 nova_compute[189608]: 2025-11-24 22:20:27.680 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:20:27 compute-0 nova_compute[189608]: 2025-11-24 22:20:27.815 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance ea741b45-c6b4-41c0-a70f-c752b616faa2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:20:27 compute-0 nova_compute[189608]: 2025-11-24 22:20:27.816 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance 7e7d375c-a42c-41c5-934f-c46941a40067 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:20:27 compute-0 nova_compute[189608]: 2025-11-24 22:20:27.817 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance 91f0c72b-d856-4250-89de-f420d598e74a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:20:27 compute-0 nova_compute[189608]: 2025-11-24 22:20:27.817 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 22:20:27 compute-0 nova_compute[189608]: 2025-11-24 22:20:27.818 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 22:20:27 compute-0 nova_compute[189608]: 2025-11-24 22:20:27.924 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Refreshing inventories for resource provider 7680d048-14f1-46f8-a34d-a7eb32eb11df _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 24 22:20:28 compute-0 nova_compute[189608]: 2025-11-24 22:20:28.027 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Updating ProviderTree inventory for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 24 22:20:28 compute-0 nova_compute[189608]: 2025-11-24 22:20:28.028 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Updating inventory in ProviderTree for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 24 22:20:28 compute-0 nova_compute[189608]: 2025-11-24 22:20:28.043 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Refreshing aggregate associations for resource provider 7680d048-14f1-46f8-a34d-a7eb32eb11df, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 24 22:20:28 compute-0 nova_compute[189608]: 2025-11-24 22:20:28.062 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Refreshing trait associations for resource provider 7680d048-14f1-46f8-a34d-a7eb32eb11df, traits: HW_CPU_X86_AVX2,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NODE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE4A,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_ACCELERATORS,HW_CPU_X86_SHA,HW_CPU_X86_AVX,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_1_2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 24 22:20:28 compute-0 nova_compute[189608]: 2025-11-24 22:20:28.137 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:20:28 compute-0 nova_compute[189608]: 2025-11-24 22:20:28.156 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:20:28 compute-0 nova_compute[189608]: 2025-11-24 22:20:28.174 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 22:20:28 compute-0 nova_compute[189608]: 2025-11-24 22:20:28.175 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:20:28 compute-0 nova_compute[189608]: 2025-11-24 22:20:28.175 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:20:28 compute-0 nova_compute[189608]: 2025-11-24 22:20:28.176 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 24 22:20:28 compute-0 nova_compute[189608]: 2025-11-24 22:20:28.627 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:20:29 compute-0 podman[203795]: time="2025-11-24T22:20:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:20:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:20:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 24 22:20:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:20:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4806 "" "Go-http-client/1.1"
Nov 24 22:20:30 compute-0 nova_compute[189608]: 2025-11-24 22:20:30.189 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:20:30 compute-0 nova_compute[189608]: 2025-11-24 22:20:30.190 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:20:30 compute-0 nova_compute[189608]: 2025-11-24 22:20:30.191 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:20:30 compute-0 nova_compute[189608]: 2025-11-24 22:20:30.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:20:31 compute-0 openstack_network_exporter[205945]: ERROR   22:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:20:31 compute-0 openstack_network_exporter[205945]: ERROR   22:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:20:31 compute-0 openstack_network_exporter[205945]: ERROR   22:20:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:20:31 compute-0 openstack_network_exporter[205945]: ERROR   22:20:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:20:31 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:20:31 compute-0 openstack_network_exporter[205945]: ERROR   22:20:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:20:31 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:20:31 compute-0 nova_compute[189608]: 2025-11-24 22:20:31.698 189613 DEBUG oslo_concurrency.lockutils [None req-7db41b41-a4f3-441c-aed9-845b5c5a7fd0 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "91f0c72b-d856-4250-89de-f420d598e74a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:20:31 compute-0 nova_compute[189608]: 2025-11-24 22:20:31.701 189613 DEBUG oslo_concurrency.lockutils [None req-7db41b41-a4f3-441c-aed9-845b5c5a7fd0 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "91f0c72b-d856-4250-89de-f420d598e74a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:20:31 compute-0 nova_compute[189608]: 2025-11-24 22:20:31.703 189613 DEBUG oslo_concurrency.lockutils [None req-7db41b41-a4f3-441c-aed9-845b5c5a7fd0 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "91f0c72b-d856-4250-89de-f420d598e74a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:20:31 compute-0 nova_compute[189608]: 2025-11-24 22:20:31.703 189613 DEBUG oslo_concurrency.lockutils [None req-7db41b41-a4f3-441c-aed9-845b5c5a7fd0 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "91f0c72b-d856-4250-89de-f420d598e74a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:20:31 compute-0 nova_compute[189608]: 2025-11-24 22:20:31.704 189613 DEBUG oslo_concurrency.lockutils [None req-7db41b41-a4f3-441c-aed9-845b5c5a7fd0 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "91f0c72b-d856-4250-89de-f420d598e74a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:20:31 compute-0 nova_compute[189608]: 2025-11-24 22:20:31.707 189613 INFO nova.compute.manager [None req-7db41b41-a4f3-441c-aed9-845b5c5a7fd0 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] Terminating instance
Nov 24 22:20:31 compute-0 nova_compute[189608]: 2025-11-24 22:20:31.709 189613 DEBUG oslo_concurrency.lockutils [None req-7db41b41-a4f3-441c-aed9-845b5c5a7fd0 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "refresh_cache-91f0c72b-d856-4250-89de-f420d598e74a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:20:31 compute-0 nova_compute[189608]: 2025-11-24 22:20:31.710 189613 DEBUG oslo_concurrency.lockutils [None req-7db41b41-a4f3-441c-aed9-845b5c5a7fd0 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquired lock "refresh_cache-91f0c72b-d856-4250-89de-f420d598e74a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:20:31 compute-0 nova_compute[189608]: 2025-11-24 22:20:31.711 189613 DEBUG nova.network.neutron [None req-7db41b41-a4f3-441c-aed9-845b5c5a7fd0 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 22:20:31 compute-0 nova_compute[189608]: 2025-11-24 22:20:31.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:20:31 compute-0 nova_compute[189608]: 2025-11-24 22:20:31.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:20:31 compute-0 nova_compute[189608]: 2025-11-24 22:20:31.794 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 22:20:31 compute-0 nova_compute[189608]: 2025-11-24 22:20:31.888 189613 DEBUG nova.network.neutron [None req-7db41b41-a4f3-441c-aed9-845b5c5a7fd0 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 22:20:32 compute-0 nova_compute[189608]: 2025-11-24 22:20:32.242 189613 DEBUG nova.network.neutron [None req-7db41b41-a4f3-441c-aed9-845b5c5a7fd0 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:20:32 compute-0 nova_compute[189608]: 2025-11-24 22:20:32.255 189613 DEBUG oslo_concurrency.lockutils [None req-7db41b41-a4f3-441c-aed9-845b5c5a7fd0 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Releasing lock "refresh_cache-91f0c72b-d856-4250-89de-f420d598e74a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:20:32 compute-0 nova_compute[189608]: 2025-11-24 22:20:32.256 189613 DEBUG nova.compute.manager [None req-7db41b41-a4f3-441c-aed9-845b5c5a7fd0 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 24 22:20:32 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Nov 24 22:20:32 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 20.844s CPU time.
Nov 24 22:20:32 compute-0 systemd-machined[155884]: Machine qemu-5-instance-00000005 terminated.
Nov 24 22:20:32 compute-0 nova_compute[189608]: 2025-11-24 22:20:32.558 189613 INFO nova.virt.libvirt.driver [-] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] Instance destroyed successfully.
Nov 24 22:20:32 compute-0 nova_compute[189608]: 2025-11-24 22:20:32.558 189613 DEBUG nova.objects.instance [None req-7db41b41-a4f3-441c-aed9-845b5c5a7fd0 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lazy-loading 'resources' on Instance uuid 91f0c72b-d856-4250-89de-f420d598e74a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:20:32 compute-0 nova_compute[189608]: 2025-11-24 22:20:32.576 189613 INFO nova.virt.libvirt.driver [None req-7db41b41-a4f3-441c-aed9-845b5c5a7fd0 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] Deleting instance files /var/lib/nova/instances/91f0c72b-d856-4250-89de-f420d598e74a_del
Nov 24 22:20:32 compute-0 nova_compute[189608]: 2025-11-24 22:20:32.577 189613 INFO nova.virt.libvirt.driver [None req-7db41b41-a4f3-441c-aed9-845b5c5a7fd0 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] Deletion of /var/lib/nova/instances/91f0c72b-d856-4250-89de-f420d598e74a_del complete
Nov 24 22:20:32 compute-0 nova_compute[189608]: 2025-11-24 22:20:32.651 189613 INFO nova.compute.manager [None req-7db41b41-a4f3-441c-aed9-845b5c5a7fd0 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] Took 0.39 seconds to destroy the instance on the hypervisor.
Nov 24 22:20:32 compute-0 nova_compute[189608]: 2025-11-24 22:20:32.652 189613 DEBUG oslo.service.loopingcall [None req-7db41b41-a4f3-441c-aed9-845b5c5a7fd0 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 24 22:20:32 compute-0 nova_compute[189608]: 2025-11-24 22:20:32.652 189613 DEBUG nova.compute.manager [-] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 24 22:20:32 compute-0 nova_compute[189608]: 2025-11-24 22:20:32.653 189613 DEBUG nova.network.neutron [-] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 24 22:20:32 compute-0 nova_compute[189608]: 2025-11-24 22:20:32.683 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:20:33 compute-0 nova_compute[189608]: 2025-11-24 22:20:33.470 189613 DEBUG nova.network.neutron [-] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 22:20:33 compute-0 nova_compute[189608]: 2025-11-24 22:20:33.486 189613 DEBUG nova.network.neutron [-] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:20:33 compute-0 nova_compute[189608]: 2025-11-24 22:20:33.506 189613 INFO nova.compute.manager [-] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] Took 0.85 seconds to deallocate network for instance.
Nov 24 22:20:33 compute-0 nova_compute[189608]: 2025-11-24 22:20:33.573 189613 DEBUG oslo_concurrency.lockutils [None req-7db41b41-a4f3-441c-aed9-845b5c5a7fd0 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:20:33 compute-0 nova_compute[189608]: 2025-11-24 22:20:33.574 189613 DEBUG oslo_concurrency.lockutils [None req-7db41b41-a4f3-441c-aed9-845b5c5a7fd0 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:20:33 compute-0 podman[247036]: 2025-11-24 22:20:33.582899601 +0000 UTC m=+0.121849030 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.4, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.buildah.version=1.29.0, architecture=x86_64, name=ubi9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, release-0.7.12=, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., com.redhat.component=ubi9-container)
Nov 24 22:20:33 compute-0 podman[247038]: 2025-11-24 22:20:33.586987988 +0000 UTC m=+0.109934227 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 22:20:33 compute-0 podman[247037]: 2025-11-24 22:20:33.601866792 +0000 UTC m=+0.136912419 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.expose-services=, maintainer=Red Hat, Inc., release=1755695350, version=9.6, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 24 22:20:33 compute-0 nova_compute[189608]: 2025-11-24 22:20:33.631 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:20:33 compute-0 nova_compute[189608]: 2025-11-24 22:20:33.711 189613 DEBUG nova.compute.provider_tree [None req-7db41b41-a4f3-441c-aed9-845b5c5a7fd0 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:20:33 compute-0 nova_compute[189608]: 2025-11-24 22:20:33.732 189613 DEBUG nova.scheduler.client.report [None req-7db41b41-a4f3-441c-aed9-845b5c5a7fd0 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:20:33 compute-0 nova_compute[189608]: 2025-11-24 22:20:33.757 189613 DEBUG oslo_concurrency.lockutils [None req-7db41b41-a4f3-441c-aed9-845b5c5a7fd0 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.184s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:20:33 compute-0 nova_compute[189608]: 2025-11-24 22:20:33.796 189613 INFO nova.scheduler.client.report [None req-7db41b41-a4f3-441c-aed9-845b5c5a7fd0 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Deleted allocations for instance 91f0c72b-d856-4250-89de-f420d598e74a
Nov 24 22:20:33 compute-0 nova_compute[189608]: 2025-11-24 22:20:33.895 189613 DEBUG oslo_concurrency.lockutils [None req-7db41b41-a4f3-441c-aed9-845b5c5a7fd0 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "91f0c72b-d856-4250-89de-f420d598e74a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.194s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:20:35 compute-0 nova_compute[189608]: 2025-11-24 22:20:35.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:20:35 compute-0 nova_compute[189608]: 2025-11-24 22:20:35.794 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 24 22:20:35 compute-0 nova_compute[189608]: 2025-11-24 22:20:35.814 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 24 22:20:37 compute-0 nova_compute[189608]: 2025-11-24 22:20:37.687 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:20:38 compute-0 podman[247090]: 2025-11-24 22:20:38.306841314 +0000 UTC m=+0.099632489 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 24 22:20:38 compute-0 nova_compute[189608]: 2025-11-24 22:20:38.635 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:20:38 compute-0 nova_compute[189608]: 2025-11-24 22:20:38.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:20:41 compute-0 podman[247115]: 2025-11-24 22:20:41.582633516 +0000 UTC m=+0.118491185 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Nov 24 22:20:41 compute-0 podman[247114]: 2025-11-24 22:20:41.62425421 +0000 UTC m=+0.161921416 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 24 22:20:42 compute-0 nova_compute[189608]: 2025-11-24 22:20:42.690 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:20:43 compute-0 nova_compute[189608]: 2025-11-24 22:20:43.637 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:20:47 compute-0 nova_compute[189608]: 2025-11-24 22:20:47.551 189613 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764022832.5489602, 91f0c72b-d856-4250-89de-f420d598e74a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:20:47 compute-0 nova_compute[189608]: 2025-11-24 22:20:47.552 189613 INFO nova.compute.manager [-] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] VM Stopped (Lifecycle Event)
Nov 24 22:20:47 compute-0 nova_compute[189608]: 2025-11-24 22:20:47.573 189613 DEBUG nova.compute.manager [None req-ecd7c542-a1b9-41c8-a623-34dee3d2fab2 - - - - - -] [instance: 91f0c72b-d856-4250-89de-f420d598e74a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:20:47 compute-0 nova_compute[189608]: 2025-11-24 22:20:47.692 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:20:48 compute-0 podman[247157]: 2025-11-24 22:20:48.553971582 +0000 UTC m=+0.100982332 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 24 22:20:48 compute-0 nova_compute[189608]: 2025-11-24 22:20:48.641 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:20:51 compute-0 sshd-session[246456]: Received disconnect from 38.102.83.200 port 42786:11: disconnected by user
Nov 24 22:20:51 compute-0 sshd-session[246456]: Disconnected from user zuul 38.102.83.200 port 42786
Nov 24 22:20:51 compute-0 sshd-session[246453]: pam_unix(sshd:session): session closed for user zuul
Nov 24 22:20:51 compute-0 systemd[1]: session-30.scope: Deactivated successfully.
Nov 24 22:20:51 compute-0 systemd[1]: session-30.scope: Consumed 1.409s CPU time.
Nov 24 22:20:51 compute-0 systemd-logind[806]: Session 30 logged out. Waiting for processes to exit.
Nov 24 22:20:51 compute-0 systemd-logind[806]: Removed session 30.
Nov 24 22:20:52 compute-0 nova_compute[189608]: 2025-11-24 22:20:52.696 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:20:53 compute-0 sshd-session[247182]: Invalid user sol from 45.148.10.240 port 47752
Nov 24 22:20:53 compute-0 sshd-session[247182]: Connection closed by invalid user sol 45.148.10.240 port 47752 [preauth]
Nov 24 22:20:53 compute-0 nova_compute[189608]: 2025-11-24 22:20:53.645 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:20:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:20:54.581 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:20:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:20:54.583 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:20:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:20:54.585 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:20:55 compute-0 podman[247184]: 2025-11-24 22:20:55.583079281 +0000 UTC m=+0.119451056 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 24 22:20:57 compute-0 nova_compute[189608]: 2025-11-24 22:20:57.699 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:20:58 compute-0 podman[247203]: 2025-11-24 22:20:58.581675741 +0000 UTC m=+0.130984544 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 24 22:20:58 compute-0 nova_compute[189608]: 2025-11-24 22:20:58.647 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:20:59 compute-0 podman[203795]: time="2025-11-24T22:20:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:20:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:20:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 24 22:20:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:20:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4816 "" "Go-http-client/1.1"
Nov 24 22:21:01 compute-0 openstack_network_exporter[205945]: ERROR   22:21:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:21:01 compute-0 openstack_network_exporter[205945]: ERROR   22:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:21:01 compute-0 openstack_network_exporter[205945]: ERROR   22:21:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:21:01 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:21:01 compute-0 openstack_network_exporter[205945]: ERROR   22:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:21:01 compute-0 openstack_network_exporter[205945]: ERROR   22:21:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:21:01 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:21:01 compute-0 anacron[237240]: Job `cron.daily' started
Nov 24 22:21:01 compute-0 anacron[237240]: Job `cron.daily' terminated
Nov 24 22:21:02 compute-0 nova_compute[189608]: 2025-11-24 22:21:02.703 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:21:03 compute-0 nova_compute[189608]: 2025-11-24 22:21:03.651 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:21:04 compute-0 podman[247223]: 2025-11-24 22:21:04.575002581 +0000 UTC m=+0.117076282 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, config_id=edpm, io.openshift.expose-services=, version=9.4, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, container_name=kepler, distribution-scope=public, release=1214.1726694543, io.buildah.version=1.29.0, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, architecture=x86_64, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Nov 24 22:21:04 compute-0 podman[247225]: 2025-11-24 22:21:04.600023689 +0000 UTC m=+0.127718253 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 22:21:04 compute-0 podman[247224]: 2025-11-24 22:21:04.602177216 +0000 UTC m=+0.136185207 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.openshift.tags=minimal rhel9, architecture=x86_64, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, release=1755695350, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Nov 24 22:21:07 compute-0 nova_compute[189608]: 2025-11-24 22:21:07.707 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:21:08 compute-0 podman[247281]: 2025-11-24 22:21:08.574010682 +0000 UTC m=+0.111196548 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 24 22:21:08 compute-0 nova_compute[189608]: 2025-11-24 22:21:08.654 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:21:10 compute-0 sshd-session[247304]: Accepted publickey for zuul from 38.102.83.200 port 42706 ssh2: RSA SHA256:s+FA6JzLhmwu0B38HrAw2rBq4K/LlOWNM3GY6zwZaMs
Nov 24 22:21:10 compute-0 systemd-logind[806]: New session 31 of user zuul.
Nov 24 22:21:10 compute-0 systemd[1]: Started Session 31 of User zuul.
Nov 24 22:21:10 compute-0 sshd-session[247304]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 22:21:12 compute-0 sudo[247507]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfikotmzaonyjgoqqgrbxyefwznrsbqe ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764022871.1653752-60159-241299105237221/AnsiballZ_command.py'
Nov 24 22:21:12 compute-0 sudo[247507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 22:21:12 compute-0 podman[247457]: 2025-11-24 22:21:12.039565015 +0000 UTC m=+0.097165233 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 24 22:21:12 compute-0 podman[247456]: 2025-11-24 22:21:12.056180702 +0000 UTC m=+0.124499533 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible)
Nov 24 22:21:12 compute-0 python3[247523]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 22:21:12 compute-0 sudo[247507]: pam_unix(sudo:session): session closed for user root
Nov 24 22:21:12 compute-0 nova_compute[189608]: 2025-11-24 22:21:12.711 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:21:13 compute-0 nova_compute[189608]: 2025-11-24 22:21:13.656 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:21:15 compute-0 nova_compute[189608]: 2025-11-24 22:21:15.805 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:21:17 compute-0 nova_compute[189608]: 2025-11-24 22:21:17.715 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:21:18 compute-0 nova_compute[189608]: 2025-11-24 22:21:18.659 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:21:19 compute-0 podman[247562]: 2025-11-24 22:21:19.579866947 +0000 UTC m=+0.132717869 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 24 22:21:20 compute-0 sudo[247758]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcmpmrjyxbknohrsvdqsauncwrwmrzoq ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764022879.9300046-60323-191029049615609/AnsiballZ_command.py'
Nov 24 22:21:20 compute-0 sudo[247758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 22:21:20 compute-0 nova_compute[189608]: 2025-11-24 22:21:20.792 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:21:20 compute-0 nova_compute[189608]: 2025-11-24 22:21:20.793 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 22:21:20 compute-0 nova_compute[189608]: 2025-11-24 22:21:20.793 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 22:21:20 compute-0 python3[247760]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep podman_exporter _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 22:21:20 compute-0 sudo[247758]: pam_unix(sudo:session): session closed for user root
Nov 24 22:21:21 compute-0 nova_compute[189608]: 2025-11-24 22:21:21.515 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "refresh_cache-ea741b45-c6b4-41c0-a70f-c752b616faa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:21:21 compute-0 nova_compute[189608]: 2025-11-24 22:21:21.516 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquired lock "refresh_cache-ea741b45-c6b4-41c0-a70f-c752b616faa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:21:21 compute-0 nova_compute[189608]: 2025-11-24 22:21:21.517 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 24 22:21:21 compute-0 nova_compute[189608]: 2025-11-24 22:21:21.517 189613 DEBUG nova.objects.instance [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lazy-loading 'info_cache' on Instance uuid ea741b45-c6b4-41c0-a70f-c752b616faa2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:21:22 compute-0 nova_compute[189608]: 2025-11-24 22:21:22.718 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:21:23 compute-0 nova_compute[189608]: 2025-11-24 22:21:23.662 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:21:23 compute-0 nova_compute[189608]: 2025-11-24 22:21:23.677 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Updating instance_info_cache with network_info: [{"id": "5430cfcb-550b-4518-9caa-0720f99730b9", "address": "fa:16:3e:85:21:ae", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5430cfcb-55", "ovs_interfaceid": "5430cfcb-550b-4518-9caa-0720f99730b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:21:23 compute-0 nova_compute[189608]: 2025-11-24 22:21:23.697 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Releasing lock "refresh_cache-ea741b45-c6b4-41c0-a70f-c752b616faa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:21:23 compute-0 nova_compute[189608]: 2025-11-24 22:21:23.698 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 24 22:21:26 compute-0 podman[247799]: 2025-11-24 22:21:26.630177766 +0000 UTC m=+0.174522997 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 24 22:21:27 compute-0 nova_compute[189608]: 2025-11-24 22:21:27.721 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:21:27 compute-0 nova_compute[189608]: 2025-11-24 22:21:27.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:21:27 compute-0 nova_compute[189608]: 2025-11-24 22:21:27.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:21:27 compute-0 nova_compute[189608]: 2025-11-24 22:21:27.795 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:21:27 compute-0 nova_compute[189608]: 2025-11-24 22:21:27.828 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:21:27 compute-0 nova_compute[189608]: 2025-11-24 22:21:27.829 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:21:27 compute-0 nova_compute[189608]: 2025-11-24 22:21:27.830 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:21:27 compute-0 nova_compute[189608]: 2025-11-24 22:21:27.830 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 22:21:27 compute-0 nova_compute[189608]: 2025-11-24 22:21:27.948 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:21:28 compute-0 nova_compute[189608]: 2025-11-24 22:21:28.006 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:21:28 compute-0 nova_compute[189608]: 2025-11-24 22:21:28.007 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:21:28 compute-0 nova_compute[189608]: 2025-11-24 22:21:28.084 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:21:28 compute-0 nova_compute[189608]: 2025-11-24 22:21:28.087 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:21:28 compute-0 rsyslogd[237036]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 22:21:28 compute-0 nova_compute[189608]: 2025-11-24 22:21:28.151 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:21:28 compute-0 nova_compute[189608]: 2025-11-24 22:21:28.153 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:21:28 compute-0 nova_compute[189608]: 2025-11-24 22:21:28.253 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.eph0 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:21:28 compute-0 nova_compute[189608]: 2025-11-24 22:21:28.264 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:21:28 compute-0 nova_compute[189608]: 2025-11-24 22:21:28.362 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:21:28 compute-0 nova_compute[189608]: 2025-11-24 22:21:28.363 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:21:28 compute-0 nova_compute[189608]: 2025-11-24 22:21:28.444 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:21:28 compute-0 nova_compute[189608]: 2025-11-24 22:21:28.446 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:21:28 compute-0 nova_compute[189608]: 2025-11-24 22:21:28.514 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:21:28 compute-0 nova_compute[189608]: 2025-11-24 22:21:28.516 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:21:28 compute-0 nova_compute[189608]: 2025-11-24 22:21:28.598 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:21:28 compute-0 nova_compute[189608]: 2025-11-24 22:21:28.666 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:21:29 compute-0 nova_compute[189608]: 2025-11-24 22:21:29.119 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:21:29 compute-0 nova_compute[189608]: 2025-11-24 22:21:29.122 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4885MB free_disk=72.15608978271484GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 22:21:29 compute-0 nova_compute[189608]: 2025-11-24 22:21:29.123 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:21:29 compute-0 nova_compute[189608]: 2025-11-24 22:21:29.123 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:21:29 compute-0 nova_compute[189608]: 2025-11-24 22:21:29.223 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance ea741b45-c6b4-41c0-a70f-c752b616faa2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:21:29 compute-0 nova_compute[189608]: 2025-11-24 22:21:29.223 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance 7e7d375c-a42c-41c5-934f-c46941a40067 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:21:29 compute-0 nova_compute[189608]: 2025-11-24 22:21:29.224 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 22:21:29 compute-0 nova_compute[189608]: 2025-11-24 22:21:29.224 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 22:21:29 compute-0 nova_compute[189608]: 2025-11-24 22:21:29.295 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:21:29 compute-0 nova_compute[189608]: 2025-11-24 22:21:29.309 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:21:29 compute-0 nova_compute[189608]: 2025-11-24 22:21:29.341 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 22:21:29 compute-0 nova_compute[189608]: 2025-11-24 22:21:29.343 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.219s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:21:29 compute-0 podman[247845]: 2025-11-24 22:21:29.588299488 +0000 UTC m=+0.133113121 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, io.buildah.version=1.41.4, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 24 22:21:29 compute-0 podman[203795]: time="2025-11-24T22:21:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:21:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:21:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 24 22:21:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:21:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4805 "" "Go-http-client/1.1"
Nov 24 22:21:30 compute-0 nova_compute[189608]: 2025-11-24 22:21:30.344 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:21:30 compute-0 nova_compute[189608]: 2025-11-24 22:21:30.345 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:21:30 compute-0 nova_compute[189608]: 2025-11-24 22:21:30.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:21:30 compute-0 sshd-session[247940]: Invalid user sol from 193.32.162.145 port 40942
Nov 24 22:21:30 compute-0 sudo[248038]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxeuhqbyvkuoclwmckkdjuhdcmplxdei ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764022890.1553013-60477-193101124230128/AnsiballZ_command.py'
Nov 24 22:21:31 compute-0 sudo[248038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 22:21:31 compute-0 sshd-session[247940]: Connection closed by invalid user sol 193.32.162.145 port 40942 [preauth]
Nov 24 22:21:31 compute-0 python3[248040]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep kepler _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 22:21:31 compute-0 sudo[248038]: pam_unix(sudo:session): session closed for user root
Nov 24 22:21:31 compute-0 openstack_network_exporter[205945]: ERROR   22:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:21:31 compute-0 openstack_network_exporter[205945]: ERROR   22:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:21:31 compute-0 openstack_network_exporter[205945]: ERROR   22:21:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:21:31 compute-0 openstack_network_exporter[205945]: ERROR   22:21:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:21:31 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:21:31 compute-0 openstack_network_exporter[205945]: ERROR   22:21:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:21:31 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:21:32 compute-0 nova_compute[189608]: 2025-11-24 22:21:32.724 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:21:32 compute-0 nova_compute[189608]: 2025-11-24 22:21:32.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:21:33 compute-0 nova_compute[189608]: 2025-11-24 22:21:33.669 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:21:33 compute-0 nova_compute[189608]: 2025-11-24 22:21:33.792 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:21:33 compute-0 nova_compute[189608]: 2025-11-24 22:21:33.793 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 22:21:35 compute-0 podman[248082]: 2025-11-24 22:21:35.565770647 +0000 UTC m=+0.091023363 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:21:35 compute-0 podman[248080]: 2025-11-24 22:21:35.566372464 +0000 UTC m=+0.101714323 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, architecture=x86_64, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=base rhel9, managed_by=edpm_ansible, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, version=9.4)
Nov 24 22:21:35 compute-0 podman[248081]: 2025-11-24 22:21:35.570644138 +0000 UTC m=+0.102016224 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, name=ubi9-minimal, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, release=1755695350, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Nov 24 22:21:37 compute-0 nova_compute[189608]: 2025-11-24 22:21:37.727 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:21:38 compute-0 nova_compute[189608]: 2025-11-24 22:21:38.672 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:21:39 compute-0 podman[248139]: 2025-11-24 22:21:39.528678844 +0000 UTC m=+0.087377678 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
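The node_exporter health check above shows the exporter published on host port 9100 with a TLS web config file and the systemd collector restricted to an EDPM/OVS/virt unit regex. A quick way to confirm it is actually serving metrics is to scrape the endpoint; the sketch below assumes the exporter is reachable on localhost:9100 over HTTPS as configured by the mounted web config (switch to http:// and drop the SSL context if TLS is not enforced), and it disables certificate verification purely as a local smoke test:

# Sketch: scrape node_exporter on this host and print a few metric lines.
# Assumes https://localhost:9100/metrics is reachable from the host.
import ssl
import urllib.request

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # local smoke test only

with urllib.request.urlopen("https://localhost:9100/metrics", context=ctx, timeout=5) as resp:
    body = resp.read().decode("utf-8", errors="replace")

for line in body.splitlines()[:10]:
    print(line)
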
Nov 24 22:21:42 compute-0 podman[248165]: 2025-11-24 22:21:42.57529837 +0000 UTC m=+0.123142650 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 24 22:21:42 compute-0 podman[248164]: 2025-11-24 22:21:42.593603059 +0000 UTC m=+0.145021501 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 22:21:42 compute-0 nova_compute[189608]: 2025-11-24 22:21:42.730 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:21:43 compute-0 nova_compute[189608]: 2025-11-24 22:21:43.675 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:21:46 compute-0 sudo[248379]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-weyldyejricvlxlmkyzbtegqstzzczrb ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764022905.8841255-60695-90781409491095/AnsiballZ_command.py'
Nov 24 22:21:46 compute-0 sudo[248379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 22:21:46 compute-0 python3[248381]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep openstack_network_exporter _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
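The Zuul-driven Ansible task above shells out to podman to confirm that the openstack_network_exporter container exists and reports its status. The same check can be done without a shell pipeline; a minimal sketch, assuming the invoking user is allowed to talk to podman:

# Sketch: reproduce `podman ps -a --format "{{.Names}} {{.Status}}" | grep ...`
# without a shell pipeline.
import subprocess

def container_status(name):
    out = subprocess.run(
        ["podman", "ps", "-a", "--format", "{{.Names}} {{.Status}}"],
        capture_output=True, text=True, check=True,
    )
    for line in out.stdout.splitlines():
        if line.startswith(name + " "):
            return line
    return None

print(container_status("openstack_network_exporter"))
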
Nov 24 22:21:47 compute-0 sudo[248379]: pam_unix(sudo:session): session closed for user root
Nov 24 22:21:47 compute-0 nova_compute[189608]: 2025-11-24 22:21:47.735 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:21:48 compute-0 nova_compute[189608]: 2025-11-24 22:21:48.677 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:21:50 compute-0 podman[248421]: 2025-11-24 22:21:50.569554577 +0000 UTC m=+0.114440380 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 24 22:21:52 compute-0 nova_compute[189608]: 2025-11-24 22:21:52.739 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:21:53 compute-0 nova_compute[189608]: 2025-11-24 22:21:53.681 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:21:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:21:54.582 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:21:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:21:54.584 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:21:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:21:54.585 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
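The ovn_metadata_agent lines show ProcessMonitor._check_child_processes running under an oslo.concurrency lock; the acquire/release pair with the waited/held timings is exactly what lockutils emits at DEBUG. A minimal sketch of the same pattern, assuming the oslo.concurrency package is installed; the lock name is taken from the log, the function body is illustrative:

# Sketch: guard a periodic check with an oslo.concurrency lock, the pattern
# behind the "Acquiring lock ... acquired ... released" DEBUG lines above.
# Requires the oslo.concurrency package.
from oslo_concurrency import lockutils

@lockutils.synchronized("_check_child_processes")
def check_child_processes():
    # Inspect child-process liveness here; kept empty in this sketch.
    pass

if __name__ == "__main__":
    check_child_processes()
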
Nov 24 22:21:57 compute-0 podman[248445]: 2025-11-24 22:21:57.594438784 +0000 UTC m=+0.136560898 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 24 22:21:57 compute-0 nova_compute[189608]: 2025-11-24 22:21:57.743 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:21:58 compute-0 nova_compute[189608]: 2025-11-24 22:21:58.683 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:21:59 compute-0 podman[203795]: time="2025-11-24T22:21:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:21:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:21:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 24 22:21:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:21:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4813 "" "Go-http-client/1.1"
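These podman[203795] lines record the libpod REST API answering the exporters' queries for container lists and stats over the podman socket (the podman_exporter container above mounts /run/podman/podman.sock for exactly this purpose). A minimal sketch of the same containers/json call spoken over the unix socket, assuming the socket path from the exporter config and read access to it:

# Sketch: issue the same GET /v4.9.3/libpod/containers/json?all=true request
# seen in the log, speaking HTTP over the podman unix socket.
# Assumes /run/podman/podman.sock exists and is readable by this user.
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    def __init__(self, path):
        super().__init__("localhost")
        self.unix_path = path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.unix_path)
        self.sock = sock

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
resp = conn.getresponse()
containers = json.loads(resp.read())
for c in containers:
    print(c.get("Names"), c.get("State"))
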
Nov 24 22:22:00 compute-0 podman[248465]: 2025-11-24 22:22:00.530076294 +0000 UTC m=+0.088440871 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Nov 24 22:22:01 compute-0 openstack_network_exporter[205945]: ERROR   22:22:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:22:01 compute-0 openstack_network_exporter[205945]: ERROR   22:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:22:01 compute-0 openstack_network_exporter[205945]: ERROR   22:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:22:01 compute-0 openstack_network_exporter[205945]: ERROR   22:22:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:22:01 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:22:01 compute-0 openstack_network_exporter[205945]: ERROR   22:22:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:22:01 compute-0 openstack_network_exporter[205945]: 
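This 22:22:01 burst repeats every polling interval: the exporter cannot locate control sockets for ovsdb-server or ovn-northd (ovn-northd does not run on a compute node at all), and the PMD queries fail as before. appctl-style tools find a running daemon through its <name>.<pid>.ctl socket in the run directory, so a quick check is simply to list those files; a minimal sketch, using the host-side directories the exporter container mounts as /run/openvswitch and /run/ovn:

# Sketch: look for the <daemon>.<pid>.ctl control sockets that appctl-style
# tools use to locate a running daemon.
import glob

for rundir in ("/var/run/openvswitch", "/var/lib/openvswitch/ovn"):
    ctl_files = sorted(glob.glob(f"{rundir}/*.ctl"))
    print(rundir, "->", ctl_files or "no control sockets found")
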
Nov 24 22:22:02 compute-0 nova_compute[189608]: 2025-11-24 22:22:02.746 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:22:03 compute-0 nova_compute[189608]: 2025-11-24 22:22:03.686 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:22:06 compute-0 podman[248485]: 2025-11-24 22:22:06.572030305 +0000 UTC m=+0.116360349 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, release=1214.1726694543, vcs-type=git, managed_by=edpm_ansible, version=9.4, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9)
Nov 24 22:22:06 compute-0 podman[248486]: 2025-11-24 22:22:06.616299082 +0000 UTC m=+0.152593426 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.buildah.version=1.33.7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vendor=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter)
Nov 24 22:22:06 compute-0 podman[248487]: 2025-11-24 22:22:06.631325509 +0000 UTC m=+0.162545316 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 22:22:07 compute-0 nova_compute[189608]: 2025-11-24 22:22:07.750 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:22:08 compute-0 nova_compute[189608]: 2025-11-24 22:22:08.688 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:22:10 compute-0 podman[248543]: 2025-11-24 22:22:10.601619937 +0000 UTC m=+0.148038934 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 24 22:22:12 compute-0 nova_compute[189608]: 2025-11-24 22:22:12.753 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:22:13 compute-0 podman[248568]: 2025-11-24 22:22:13.585391046 +0000 UTC m=+0.124566544 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 24 22:22:13 compute-0 podman[248567]: 2025-11-24 22:22:13.63022046 +0000 UTC m=+0.179685449 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 24 22:22:13 compute-0 nova_compute[189608]: 2025-11-24 22:22:13.691 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:22:13 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.625 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.626 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.626 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.627 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f4c57fbbfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.628 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.628 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.628 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.628 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.628 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.629 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.629 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.629 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.629 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.629 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.630 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.630 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.630 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.630 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.630 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.630 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.630 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.630 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.631 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.631 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.631 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.631 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.631 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.631 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.631 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
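The warning at 22:22:17.625 (more pollsters than worker threads) together with the "[1] threads" line means these pollsters execute serially from a single-worker executor; the long run of "Registering pollster" lines is simply each pollster being queued onto it. A minimal sketch of that pattern using only the standard library; it illustrates the queuing behaviour rather than ceilometer's own code:

# Sketch: with max_workers=1 every queued pollster runs strictly one after
# another, so a long meter list stretches the polling cycle.
from concurrent.futures import ThreadPoolExecutor

def poll(meter):
    return f"polled {meter}"

meters = ["network.outgoing.packets.drop", "network.outgoing.packets.error",
          "disk.device.capacity"]

with ThreadPoolExecutor(max_workers=1) as executor:
    for result in executor.map(poll, meters):
        print(result)
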
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.637 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7e7d375c-a42c-41c5-934f-c46941a40067', 'name': 'vn-7lqlsa5-qwni3m7kkkje-hcw6uchrxoeb-vnf-7cvls2zpo5gs', 'flavor': {'id': 'cb3b4b8d-66d1-4524-ab48-d562ca8e5b4b', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a63b9561-12dc-4c11-858f-aa6fafbed036'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '309342b7e3e849b2a5dd56651d8fa068', 'user_id': '572aaac113f54af8a894707849aed6bf', 'hostId': '138597a5fd3bc726da772e57d320c412a04081cc4fa53fa9b77f5b6a', 'status': 'active', 'metadata': {'metering.server_group': 'b438824c-ce52-4539-9db6-355e0ca018db'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.644 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'ea741b45-c6b4-41c0-a70f-c752b616faa2', 'name': 'test_0', 'flavor': {'id': 'cb3b4b8d-66d1-4524-ab48-d562ca8e5b4b', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a63b9561-12dc-4c11-858f-aa6fafbed036'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '309342b7e3e849b2a5dd56651d8fa068', 'user_id': '572aaac113f54af8a894707849aed6bf', 'hostId': '138597a5fd3bc726da772e57d320c412a04081cc4fa53fa9b77f5b6a', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.645 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.645 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.645 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.646 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.647 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-24T22:22:17.646173) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.656 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.664 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.665 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
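Each poll cycle pairs the discovered instance data (the dicts logged by discover_libvirt_polling) with a pollster and emits one sample per instance, e.g. network.outgoing.packets.drop with volume 0 for both instances above. A minimal sketch of the kind of per-instance record being produced, assuming the instance dict shape shown in the log; field names are illustrative and not ceilometer's actual Sample class:

# Sketch: turn a discovered-instance dict (as logged above) into a simple
# per-instance sample record for one meter. Field names are illustrative.
from datetime import datetime, timezone

def make_sample(instance, meter, volume, unit):
    return {
        "name": meter,
        "unit": unit,
        "volume": volume,
        "resource_id": instance["id"],
        "project_id": instance["tenant_id"],
        "user_id": instance["user_id"],
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "resource_metadata": {"instance_name": instance["name"],
                              "flavor": instance["flavor"]["name"]},
    }

instance = {"id": "ea741b45-c6b4-41c0-a70f-c752b616faa2", "name": "test_0",
            "tenant_id": "309342b7e3e849b2a5dd56651d8fa068",
            "user_id": "572aaac113f54af8a894707849aed6bf",
            "flavor": {"name": "m1.small"}}
print(make_sample(instance, "network.outgoing.packets.drop", 0, "packet"))
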
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.665 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f4c580340b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.665 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.666 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.666 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.667 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.668 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.668 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-24T22:22:17.667546) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.668 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.669 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.669 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f4c57fba4e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.669 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.670 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.670 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.670 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.672 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-24T22:22:17.670422) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.715 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.716 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.716 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.755 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.756 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.757 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.758 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 24 22:22:17 compute-0 nova_compute[189608]: 2025-11-24 22:22:17.757 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.758 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f4c57fbb080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.758 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.758 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.758 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.759 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.759 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-24T22:22:17.759071) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.881 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.882 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:17.882 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.005 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.006 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.007 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.007 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.008 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f4c57fbb1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.008 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.008 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.008 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.008 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.009 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.read.latency volume: 742675991 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.009 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.read.latency volume: 148600369 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.010 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.read.latency volume: 107984847 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.010 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.latency volume: 667223647 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.010 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-24T22:22:18.008917) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.011 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.latency volume: 118019506 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.011 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.latency volume: 84934831 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.012 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.012 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f4c57fb9730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.012 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.012 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.013 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.013 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.013 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-24T22:22:18.013243) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.049 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/cpu volume: 41860000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.081 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/cpu volume: 47960000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.082 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.082 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f4c57fbb200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.082 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.082 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.083 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.083 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.083 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.084 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.084 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-24T22:22:18.083334) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.084 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.085 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.085 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.086 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.087 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.087 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f4c57fbb260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.087 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.087 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.087 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.088 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.088 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.088 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-24T22:22:18.088258) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.089 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.089 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.090 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.091 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.091 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.092 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.092 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f4c57fbb2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.093 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.093 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.093 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.093 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.094 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-24T22:22:18.093611) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.094 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.094 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.095 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.095 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.096 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.096 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.097 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.097 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f4c57fbb320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.097 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.098 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.098 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.098 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.098 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.write.latency volume: 2394094265 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.099 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.write.latency volume: 18006599 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.099 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-24T22:22:18.098435) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.099 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.100 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.latency volume: 2026985911 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.100 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.latency volume: 25037348 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.101 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.102 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f4c57fbb380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.102 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.103 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.103 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.103 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.103 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.104 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.104 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-24T22:22:18.103416) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.104 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.105 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.105 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.106 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.107 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.107 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f4c58034380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.107 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.107 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.108 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.108 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.108 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.108 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.109 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.109 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f4c57fbb740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.110 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.110 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.110 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.110 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.111 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.111 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.112 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.113 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f4c57fbb3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.113 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-24T22:22:18.108182) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.113 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.113 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-24T22:22:18.110945) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.113 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.113 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.114 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.114 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.115 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f4c57fbb9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.115 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.115 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f4c57fbb440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.115 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-24T22:22:18.113987) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.115 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.115 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.116 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.116 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.116 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.116 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f4c57fbbc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.117 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.117 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.117 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.117 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.117 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.117 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.118 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.118 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f4c57fbbcb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.118 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.118 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.118 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.118 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.119 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.119 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.119 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-24T22:22:18.116117) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.120 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.120 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-24T22:22:18.117305) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.120 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f4c57fbbd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.120 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-24T22:22:18.118886) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.120 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.120 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.120 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.120 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.121 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.121 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.121 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.122 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f4c57fbbda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.122 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.122 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-24T22:22:18.120906) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.122 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.122 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.122 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.123 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.outgoing.bytes volume: 2398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.123 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.124 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.124 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f4c57fbbe30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.124 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.125 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.125 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.125 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.125 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.125 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.126 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.126 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f4c57fbb680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.126 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.126 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.126 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.126 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.126 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/memory.usage volume: 48.921875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.127 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/memory.usage volume: 48.9140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.127 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.127 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f4c57fbbec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.128 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.128 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-24T22:22:18.122790) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.128 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f4c57fbb6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.128 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.128 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.128 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-24T22:22:18.125211) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.128 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.128 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.129 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.incoming.bytes volume: 1696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.129 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.129 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.129 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f4c59b99130>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.130 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.130 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.130 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.130 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.130 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.allocation volume: 22290432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.130 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.131 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.131 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.131 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.132 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.132 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.132 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f4c57fbbf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.133 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.133 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.133 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.133 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.133 14 DEBUG ceilometer.compute.pollsters [-] 7e7d375c-a42c-41c5-934f-c46941a40067/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.133 14 DEBUG ceilometer.compute.pollsters [-] ea741b45-c6b4-41c0-a70f-c752b616faa2/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.134 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.134 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.134 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-24T22:22:18.126749) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.135 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.135 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.135 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.135 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.135 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.135 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.136 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-24T22:22:18.128861) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.136 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.136 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.136 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.136 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.137 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.137 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.137 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.137 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.137 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.137 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.138 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-24T22:22:18.130408) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.137 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.138 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-24T22:22:18.133287) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.138 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.138 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.138 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.139 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.139 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.139 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.139 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:22:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:22:18.140 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
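The ceilometer cycle above emits one "_stats_to_sample" DEBUG line per instance and meter. A minimal sketch for pulling those volumes back out of a journal slice like this one; the regex follows the line format shown above, the demo string is copied from it, and this is a log-reading aid, not ceilometer code:

    import re

    # Matches the "_stats_to_sample" DEBUG lines from the compute pollsters above.
    SAMPLE_RE = re.compile(
        r"DEBUG ceilometer\.compute\.pollsters \[-\] "
        r"(?P<instance>[0-9a-f-]{36})/(?P<meter>[\w.]+) volume: (?P<volume>[\d.]+)"
    )

    def iter_samples(lines):
        """Yield (instance uuid, meter name, volume) for every sample line."""
        for line in lines:
            m = SAMPLE_RE.search(line)
            if m:
                yield m.group("instance"), m.group("meter"), float(m.group("volume"))

    # Demo on one line copied from the journal slice above.
    demo = ("2025-11-24 22:22:18.126 14 DEBUG ceilometer.compute.pollsters [-] "
            "7e7d375c-a42c-41c5-934f-c46941a40067/memory.usage volume: 48.921875 _stats_to_sample ...")
    print(list(iter_samples([demo])))  # one (instance, 'memory.usage', 48.921875) tuple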
Nov 24 22:22:18 compute-0 nova_compute[189608]: 2025-11-24 22:22:18.694 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:22:21 compute-0 podman[248616]: 2025-11-24 22:22:21.572864593 +0000 UTC m=+0.113877612 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 24 22:22:22 compute-0 nova_compute[189608]: 2025-11-24 22:22:22.762 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:22:22 compute-0 nova_compute[189608]: 2025-11-24 22:22:22.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:22:22 compute-0 nova_compute[189608]: 2025-11-24 22:22:22.794 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 22:22:23 compute-0 nova_compute[189608]: 2025-11-24 22:22:23.055 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "refresh_cache-7e7d375c-a42c-41c5-934f-c46941a40067" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:22:23 compute-0 nova_compute[189608]: 2025-11-24 22:22:23.056 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquired lock "refresh_cache-7e7d375c-a42c-41c5-934f-c46941a40067" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:22:23 compute-0 nova_compute[189608]: 2025-11-24 22:22:23.056 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 24 22:22:23 compute-0 nova_compute[189608]: 2025-11-24 22:22:23.696 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:22:24 compute-0 nova_compute[189608]: 2025-11-24 22:22:24.901 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Updating instance_info_cache with network_info: [{"id": "482c3cfb-c114-4d01-aa49-09b8d4fdaaa5", "address": "fa:16:3e:b3:8e:1d", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.49", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap482c3cfb-c1", "ovs_interfaceid": "482c3cfb-c114-4d01-aa49-09b8d4fdaaa5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:22:24 compute-0 nova_compute[189608]: 2025-11-24 22:22:24.915 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Releasing lock "refresh_cache-7e7d375c-a42c-41c5-934f-c46941a40067" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:22:24 compute-0 nova_compute[189608]: 2025-11-24 22:22:24.916 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
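The instance_info_cache refresh above logs the full per-VIF network_info structure on a single line. A minimal sketch of walking that shape to list the fixed and floating addresses; the dict below is abridged from the logged entry and the loop is illustrative only, not nova's own helpers:

    import json

    # Abridged from the network_info entry logged above.
    network_info = json.loads("""
    [{"id": "482c3cfb-c114-4d01-aa49-09b8d4fdaaa5",
      "address": "fa:16:3e:b3:8e:1d",
      "network": {"label": "private",
                  "subnets": [{"cidr": "192.168.0.0/24",
                               "ips": [{"address": "192.168.0.49",
                                        "type": "fixed",
                                        "floating_ips": [{"address": "192.168.122.183",
                                                          "type": "floating"}]}]}]}}]
    """)

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print(vif["id"], ip["type"], ip["address"])
                for fip in ip.get("floating_ips", []):
                    print(vif["id"], fip["type"], fip["address"])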
Nov 24 22:22:27 compute-0 nova_compute[189608]: 2025-11-24 22:22:27.763 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:22:27 compute-0 nova_compute[189608]: 2025-11-24 22:22:27.792 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:22:27 compute-0 nova_compute[189608]: 2025-11-24 22:22:27.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:22:28 compute-0 podman[248638]: 2025-11-24 22:22:28.545071734 +0000 UTC m=+0.103199011 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 24 22:22:28 compute-0 nova_compute[189608]: 2025-11-24 22:22:28.699 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:22:28 compute-0 nova_compute[189608]: 2025-11-24 22:22:28.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:22:28 compute-0 nova_compute[189608]: 2025-11-24 22:22:28.823 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:22:28 compute-0 nova_compute[189608]: 2025-11-24 22:22:28.824 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:22:28 compute-0 nova_compute[189608]: 2025-11-24 22:22:28.824 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
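The "Acquiring lock / acquired / released" triplet above is oslo.concurrency's lock wrapper around the resource-tracker method named in the message. A minimal sketch of that pattern with lockutils.synchronized; the class and method body here are placeholders, not nova's implementation:

    from oslo_concurrency import lockutils

    class ResourceTrackerSketch:
        """Placeholder; nova's real tracker lives in nova.compute.resource_tracker."""

        @lockutils.synchronized('compute_resources')
        def clean_compute_node_cache(self):
            # Runs with the in-process "compute_resources" lock held; the
            # decorator's wrapper is what logs the acquire/release lines above.
            pass

    ResourceTrackerSketch().clean_compute_node_cache()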
Nov 24 22:22:28 compute-0 nova_compute[189608]: 2025-11-24 22:22:28.824 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 22:22:28 compute-0 nova_compute[189608]: 2025-11-24 22:22:28.934 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:22:29 compute-0 nova_compute[189608]: 2025-11-24 22:22:29.036 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:22:29 compute-0 nova_compute[189608]: 2025-11-24 22:22:29.039 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:22:29 compute-0 nova_compute[189608]: 2025-11-24 22:22:29.113 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:22:29 compute-0 nova_compute[189608]: 2025-11-24 22:22:29.115 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:22:29 compute-0 nova_compute[189608]: 2025-11-24 22:22:29.199 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.eph0 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:22:29 compute-0 nova_compute[189608]: 2025-11-24 22:22:29.203 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:22:29 compute-0 nova_compute[189608]: 2025-11-24 22:22:29.277 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.eph0 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:22:29 compute-0 nova_compute[189608]: 2025-11-24 22:22:29.289 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:22:29 compute-0 nova_compute[189608]: 2025-11-24 22:22:29.371 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:22:29 compute-0 nova_compute[189608]: 2025-11-24 22:22:29.373 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:22:29 compute-0 nova_compute[189608]: 2025-11-24 22:22:29.469 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:22:29 compute-0 nova_compute[189608]: 2025-11-24 22:22:29.470 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:22:29 compute-0 nova_compute[189608]: 2025-11-24 22:22:29.571 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:22:29 compute-0 nova_compute[189608]: 2025-11-24 22:22:29.572 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:22:29 compute-0 nova_compute[189608]: 2025-11-24 22:22:29.674 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
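Each disk probe in the audit above is the same command: qemu-img info run under the oslo_concurrency.prlimit wrapper with JSON output. A minimal sketch of that call, with the command line and limits copied from the logged invocations and error handling omitted:

    import json
    import subprocess

    def qemu_img_info(path):
        """Run qemu-img info under prlimit and return the parsed JSON report."""
        cmd = [
            "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
            "--as=1073741824", "--cpu=30", "--",
            "env", "LC_ALL=C", "LANG=C",
            "qemu-img", "info", path, "--force-share", "--output=json",
        ]
        return json.loads(subprocess.check_output(cmd))

    info = qemu_img_info("/var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk")
    print(info.get("virtual-size"), info.get("actual-size"))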
Nov 24 22:22:29 compute-0 podman[203795]: time="2025-11-24T22:22:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:22:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:22:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 24 22:22:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:22:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4807 "" "Go-http-client/1.1"
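The two GET requests above arrive over podman's libpod API on the unix socket that the podman_exporter container mounts (/run/podman/podman.sock per its config_data). A minimal sketch of issuing the same containers/json query from Python over that socket; it assumes local access to the socket and is not the exporter's own code:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection over a unix socket; podman's API has no TCP port here."""

        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")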
Nov 24 22:22:30 compute-0 nova_compute[189608]: 2025-11-24 22:22:30.299 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:22:30 compute-0 nova_compute[189608]: 2025-11-24 22:22:30.302 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4854MB free_disk=72.15608978271484GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 22:22:30 compute-0 nova_compute[189608]: 2025-11-24 22:22:30.302 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:22:30 compute-0 nova_compute[189608]: 2025-11-24 22:22:30.303 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:22:30 compute-0 nova_compute[189608]: 2025-11-24 22:22:30.413 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance ea741b45-c6b4-41c0-a70f-c752b616faa2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:22:30 compute-0 nova_compute[189608]: 2025-11-24 22:22:30.414 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance 7e7d375c-a42c-41c5-934f-c46941a40067 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:22:30 compute-0 nova_compute[189608]: 2025-11-24 22:22:30.414 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 22:22:30 compute-0 nova_compute[189608]: 2025-11-24 22:22:30.414 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 22:22:30 compute-0 nova_compute[189608]: 2025-11-24 22:22:30.494 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:22:30 compute-0 nova_compute[189608]: 2025-11-24 22:22:30.510 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:22:30 compute-0 nova_compute[189608]: 2025-11-24 22:22:30.513 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 22:22:30 compute-0 nova_compute[189608]: 2025-11-24 22:22:30.514 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.211s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
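The inventory reported a few lines up fixes total, reserved and allocation_ratio per resource class. A back-of-the-envelope check of what that yields, assuming placement's usual capacity rule capacity = (total - reserved) * allocation_ratio; the figures are copied from the log, the rule itself is an assumption stated here rather than something the log asserts:

    # Inventory values from the set_inventory_for_provider line above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)
    # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2; per the allocations logged
    #    above, the two running instances consume 2 VCPU, 1024 MB and 4 GB of that.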
Nov 24 22:22:31 compute-0 openstack_network_exporter[205945]: ERROR   22:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:22:31 compute-0 openstack_network_exporter[205945]: ERROR   22:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:22:31 compute-0 openstack_network_exporter[205945]: ERROR   22:22:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:22:31 compute-0 openstack_network_exporter[205945]: ERROR   22:22:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:22:31 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:22:31 compute-0 openstack_network_exporter[205945]: ERROR   22:22:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:22:31 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:22:31 compute-0 podman[248681]: 2025-11-24 22:22:31.600289974 +0000 UTC m=+0.142826443 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 22:22:32 compute-0 nova_compute[189608]: 2025-11-24 22:22:32.515 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:22:32 compute-0 nova_compute[189608]: 2025-11-24 22:22:32.516 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:22:32 compute-0 nova_compute[189608]: 2025-11-24 22:22:32.517 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:22:32 compute-0 nova_compute[189608]: 2025-11-24 22:22:32.768 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:22:32 compute-0 nova_compute[189608]: 2025-11-24 22:22:32.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:22:33 compute-0 nova_compute[189608]: 2025-11-24 22:22:33.704 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:22:34 compute-0 nova_compute[189608]: 2025-11-24 22:22:34.792 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:22:34 compute-0 nova_compute[189608]: 2025-11-24 22:22:34.793 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 22:22:37 compute-0 podman[248701]: 2025-11-24 22:22:37.5649285 +0000 UTC m=+0.103258102 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., container_name=kepler, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-type=git, managed_by=edpm_ansible, name=ubi9)
Nov 24 22:22:37 compute-0 podman[248703]: 2025-11-24 22:22:37.612886162 +0000 UTC m=+0.139048086 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 22:22:37 compute-0 podman[248702]: 2025-11-24 22:22:37.617073741 +0000 UTC m=+0.149753957 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vcs-type=git, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, release=1755695350, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, config_id=edpm, vendor=Red Hat, Inc., distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers)
Nov 24 22:22:37 compute-0 nova_compute[189608]: 2025-11-24 22:22:37.771 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:22:38 compute-0 nova_compute[189608]: 2025-11-24 22:22:38.707 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:22:41 compute-0 podman[248759]: 2025-11-24 22:22:41.580858506 +0000 UTC m=+0.121226830 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 24 22:22:42 compute-0 nova_compute[189608]: 2025-11-24 22:22:42.773 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:22:43 compute-0 nova_compute[189608]: 2025-11-24 22:22:43.710 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:22:44 compute-0 podman[248782]: 2025-11-24 22:22:44.548985319 +0000 UTC m=+0.085950884 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 22:22:44 compute-0 podman[248781]: 2025-11-24 22:22:44.674699058 +0000 UTC m=+0.208381750 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 22:22:46 compute-0 sshd-session[247307]: Received disconnect from 38.102.83.200 port 42706:11: disconnected by user
Nov 24 22:22:46 compute-0 sshd-session[247307]: Disconnected from user zuul 38.102.83.200 port 42706
Nov 24 22:22:46 compute-0 sshd-session[247304]: pam_unix(sshd:session): session closed for user zuul
Nov 24 22:22:46 compute-0 systemd[1]: session-31.scope: Deactivated successfully.
Nov 24 22:22:46 compute-0 systemd[1]: session-31.scope: Consumed 5.395s CPU time.
Nov 24 22:22:46 compute-0 systemd-logind[806]: Session 31 logged out. Waiting for processes to exit.
Nov 24 22:22:46 compute-0 systemd-logind[806]: Removed session 31.
Nov 24 22:22:47 compute-0 sshd-session[248822]: Invalid user sol from 45.148.10.240 port 43450
Nov 24 22:22:47 compute-0 sshd-session[248822]: Connection closed by invalid user sol 45.148.10.240 port 43450 [preauth]
Nov 24 22:22:47 compute-0 nova_compute[189608]: 2025-11-24 22:22:47.777 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:22:48 compute-0 nova_compute[189608]: 2025-11-24 22:22:48.713 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:22:52 compute-0 podman[248825]: 2025-11-24 22:22:52.589705252 +0000 UTC m=+0.126292588 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 24 22:22:52 compute-0 nova_compute[189608]: 2025-11-24 22:22:52.784 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:22:53 compute-0 nova_compute[189608]: 2025-11-24 22:22:53.715 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:22:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:22:54.583 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:22:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:22:54.584 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:22:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:22:54.584 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:22:57 compute-0 nova_compute[189608]: 2025-11-24 22:22:57.787 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:22:58 compute-0 nova_compute[189608]: 2025-11-24 22:22:58.718 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:22:59 compute-0 podman[248848]: 2025-11-24 22:22:59.606128139 +0000 UTC m=+0.144185076 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 24 22:22:59 compute-0 podman[203795]: time="2025-11-24T22:22:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:22:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:22:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 24 22:22:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:22:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4814 "" "Go-http-client/1.1"
Nov 24 22:23:01 compute-0 openstack_network_exporter[205945]: ERROR   22:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:23:01 compute-0 openstack_network_exporter[205945]: ERROR   22:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:23:01 compute-0 openstack_network_exporter[205945]: ERROR   22:23:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:23:01 compute-0 openstack_network_exporter[205945]: ERROR   22:23:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:23:01 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:23:01 compute-0 openstack_network_exporter[205945]: ERROR   22:23:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:23:01 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:23:02 compute-0 podman[248867]: 2025-11-24 22:23:02.550647454 +0000 UTC m=+0.099900287 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 24 22:23:02 compute-0 nova_compute[189608]: 2025-11-24 22:23:02.791 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:23:03 compute-0 nova_compute[189608]: 2025-11-24 22:23:03.721 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:23:07 compute-0 nova_compute[189608]: 2025-11-24 22:23:07.793 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:23:08 compute-0 podman[248884]: 2025-11-24 22:23:08.557243765 +0000 UTC m=+0.106616557 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, managed_by=edpm_ansible, release-0.7.12=, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, name=ubi9, architecture=x86_64, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, version=9.4, container_name=kepler, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 24 22:23:08 compute-0 podman[248886]: 2025-11-24 22:23:08.569898309 +0000 UTC m=+0.095936475 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 22:23:08 compute-0 podman[248885]: 2025-11-24 22:23:08.580063635 +0000 UTC m=+0.127362762 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, release=1755695350, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, version=9.6, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 24 22:23:08 compute-0 nova_compute[189608]: 2025-11-24 22:23:08.724 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:23:12 compute-0 podman[248940]: 2025-11-24 22:23:12.574923496 +0000 UTC m=+0.116622197 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 24 22:23:12 compute-0 nova_compute[189608]: 2025-11-24 22:23:12.796 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:23:13 compute-0 nova_compute[189608]: 2025-11-24 22:23:13.726 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:23:14 compute-0 podman[248964]: 2025-11-24 22:23:14.81699827 +0000 UTC m=+0.108496604 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 22:23:14 compute-0 podman[248965]: 2025-11-24 22:23:14.911825069 +0000 UTC m=+0.197885414 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 22:23:16 compute-0 nova_compute[189608]: 2025-11-24 22:23:16.790 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:23:17 compute-0 nova_compute[189608]: 2025-11-24 22:23:17.799 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:23:18 compute-0 nova_compute[189608]: 2025-11-24 22:23:18.730 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:23:22 compute-0 nova_compute[189608]: 2025-11-24 22:23:22.802 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:23:23 compute-0 podman[249010]: 2025-11-24 22:23:23.570470669 +0000 UTC m=+0.107954469 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 24 22:23:23 compute-0 nova_compute[189608]: 2025-11-24 22:23:23.732 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:23:24 compute-0 nova_compute[189608]: 2025-11-24 22:23:24.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:23:24 compute-0 nova_compute[189608]: 2025-11-24 22:23:24.794 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 22:23:24 compute-0 nova_compute[189608]: 2025-11-24 22:23:24.794 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 22:23:25 compute-0 nova_compute[189608]: 2025-11-24 22:23:25.618 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "refresh_cache-ea741b45-c6b4-41c0-a70f-c752b616faa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:23:25 compute-0 nova_compute[189608]: 2025-11-24 22:23:25.619 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquired lock "refresh_cache-ea741b45-c6b4-41c0-a70f-c752b616faa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:23:25 compute-0 nova_compute[189608]: 2025-11-24 22:23:25.619 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 24 22:23:25 compute-0 nova_compute[189608]: 2025-11-24 22:23:25.620 189613 DEBUG nova.objects.instance [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lazy-loading 'info_cache' on Instance uuid ea741b45-c6b4-41c0-a70f-c752b616faa2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:23:27 compute-0 nova_compute[189608]: 2025-11-24 22:23:27.669 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Updating instance_info_cache with network_info: [{"id": "5430cfcb-550b-4518-9caa-0720f99730b9", "address": "fa:16:3e:85:21:ae", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5430cfcb-55", "ovs_interfaceid": "5430cfcb-550b-4518-9caa-0720f99730b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:23:27 compute-0 nova_compute[189608]: 2025-11-24 22:23:27.687 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Releasing lock "refresh_cache-ea741b45-c6b4-41c0-a70f-c752b616faa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:23:27 compute-0 nova_compute[189608]: 2025-11-24 22:23:27.688 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 24 22:23:27 compute-0 nova_compute[189608]: 2025-11-24 22:23:27.806 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:23:28 compute-0 nova_compute[189608]: 2025-11-24 22:23:28.735 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:23:28 compute-0 nova_compute[189608]: 2025-11-24 22:23:28.792 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:23:28 compute-0 nova_compute[189608]: 2025-11-24 22:23:28.827 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:23:28 compute-0 nova_compute[189608]: 2025-11-24 22:23:28.829 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:23:28 compute-0 nova_compute[189608]: 2025-11-24 22:23:28.830 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:23:28 compute-0 nova_compute[189608]: 2025-11-24 22:23:28.831 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 22:23:28 compute-0 nova_compute[189608]: 2025-11-24 22:23:28.947 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:23:29 compute-0 nova_compute[189608]: 2025-11-24 22:23:29.044 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:23:29 compute-0 nova_compute[189608]: 2025-11-24 22:23:29.046 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:23:29 compute-0 nova_compute[189608]: 2025-11-24 22:23:29.107 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:23:29 compute-0 nova_compute[189608]: 2025-11-24 22:23:29.109 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:23:29 compute-0 nova_compute[189608]: 2025-11-24 22:23:29.207 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.eph0 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:23:29 compute-0 nova_compute[189608]: 2025-11-24 22:23:29.209 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:23:29 compute-0 nova_compute[189608]: 2025-11-24 22:23:29.304 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067/disk.eph0 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:23:29 compute-0 nova_compute[189608]: 2025-11-24 22:23:29.313 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:23:29 compute-0 nova_compute[189608]: 2025-11-24 22:23:29.396 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:23:29 compute-0 nova_compute[189608]: 2025-11-24 22:23:29.398 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:23:29 compute-0 nova_compute[189608]: 2025-11-24 22:23:29.465 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:23:29 compute-0 nova_compute[189608]: 2025-11-24 22:23:29.468 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:23:29 compute-0 nova_compute[189608]: 2025-11-24 22:23:29.532 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:23:29 compute-0 nova_compute[189608]: 2025-11-24 22:23:29.535 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:23:29 compute-0 nova_compute[189608]: 2025-11-24 22:23:29.599 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:23:29 compute-0 podman[203795]: time="2025-11-24T22:23:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:23:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:23:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 24 22:23:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:23:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4806 "" "Go-http-client/1.1"
Nov 24 22:23:30 compute-0 nova_compute[189608]: 2025-11-24 22:23:30.266 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:23:30 compute-0 nova_compute[189608]: 2025-11-24 22:23:30.269 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4861MB free_disk=72.15608978271484GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 22:23:30 compute-0 nova_compute[189608]: 2025-11-24 22:23:30.270 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:23:30 compute-0 nova_compute[189608]: 2025-11-24 22:23:30.271 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:23:30 compute-0 nova_compute[189608]: 2025-11-24 22:23:30.384 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance ea741b45-c6b4-41c0-a70f-c752b616faa2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:23:30 compute-0 nova_compute[189608]: 2025-11-24 22:23:30.385 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance 7e7d375c-a42c-41c5-934f-c46941a40067 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:23:30 compute-0 nova_compute[189608]: 2025-11-24 22:23:30.386 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 22:23:30 compute-0 nova_compute[189608]: 2025-11-24 22:23:30.387 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 22:23:30 compute-0 nova_compute[189608]: 2025-11-24 22:23:30.486 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:23:30 compute-0 nova_compute[189608]: 2025-11-24 22:23:30.512 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:23:30 compute-0 nova_compute[189608]: 2025-11-24 22:23:30.516 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 22:23:30 compute-0 nova_compute[189608]: 2025-11-24 22:23:30.516 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.246s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
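The Acquiring/acquired/released lines bracketing the resource-tracker update come from oslo.concurrency's lockutils helpers, which serialize concurrent updates on a named lock and log the wait and hold times seen here. A minimal sketch of the same pattern (the function body is a placeholder, not Nova's code):

```python
from oslo_concurrency import lockutils

# Decorator form: the body runs with the named lock held, producing the
# 'acquired ... waited Ns' / 'released ... held Ns' debug lines above.
@lockutils.synchronized('compute_resources')
def update_available_resource():
    pass  # placeholder: recompute usage and report inventory to Placement

# Context-manager form of the same named lock.
with lockutils.lock('compute_resources'):
    pass  # placeholder critical section
```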
Nov 24 22:23:30 compute-0 podman[249058]: 2025-11-24 22:23:30.604880206 +0000 UTC m=+0.157435178 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
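The podman health_status entries throughout this log come from the per-container healthcheck configured in config_data ('test': '/openstack/healthcheck'). A hedged sketch of triggering the same check manually from Python; the container names are taken from this log, and the exact behaviour of each check depends on the image:

```python
import subprocess

def container_is_healthy(name: str) -> bool:
    """Run the container's configured healthcheck; exit code 0 means healthy."""
    result = subprocess.run(["podman", "healthcheck", "run", name],
                            capture_output=True, text=True)
    return result.returncode == 0

for name in ("multipathd", "ovn_controller", "ovn_metadata_agent", "node_exporter"):
    print(name, "healthy" if container_is_healthy(name) else "unhealthy")
```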
Nov 24 22:23:31 compute-0 openstack_network_exporter[205945]: ERROR   22:23:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:23:31 compute-0 openstack_network_exporter[205945]: ERROR   22:23:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:23:31 compute-0 openstack_network_exporter[205945]: ERROR   22:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:23:31 compute-0 openstack_network_exporter[205945]: ERROR   22:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:23:31 compute-0 openstack_network_exporter[205945]: ERROR   22:23:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:23:31 compute-0 nova_compute[189608]: 2025-11-24 22:23:31.513 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:23:31 compute-0 nova_compute[189608]: 2025-11-24 22:23:31.514 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:23:31 compute-0 nova_compute[189608]: 2025-11-24 22:23:31.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:23:31 compute-0 nova_compute[189608]: 2025-11-24 22:23:31.795 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:23:32 compute-0 nova_compute[189608]: 2025-11-24 22:23:32.792 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:23:32 compute-0 nova_compute[189608]: 2025-11-24 22:23:32.809 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:23:33 compute-0 podman[249078]: 2025-11-24 22:23:33.574763521 +0000 UTC m=+0.125643138 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true)
Nov 24 22:23:33 compute-0 nova_compute[189608]: 2025-11-24 22:23:33.738 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:23:34 compute-0 nova_compute[189608]: 2025-11-24 22:23:34.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:23:36 compute-0 nova_compute[189608]: 2025-11-24 22:23:36.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:23:36 compute-0 nova_compute[189608]: 2025-11-24 22:23:36.794 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
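The _reclaim_queued_deletes task returns immediately because reclaim_instance_interval is not set to a positive value, i.e. soft-deleted instances are never reclaimed on this host. A simplified, hypothetical sketch of that guard (only the option name comes from the log):

```python
# Hypothetical sketch of the guard logged above: the periodic reclaim of
# soft-deleted instances only does work when the interval is positive.
reclaim_instance_interval = 0  # seconds; <= 0 disables reclaiming

def reclaim_queued_deletes():
    if reclaim_instance_interval <= 0:
        print("CONF.reclaim_instance_interval <= 0, skipping...")
        return
    # otherwise: destroy instances soft-deleted more than
    # reclaim_instance_interval seconds ago
```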
Nov 24 22:23:37 compute-0 nova_compute[189608]: 2025-11-24 22:23:37.812 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:23:38 compute-0 nova_compute[189608]: 2025-11-24 22:23:38.741 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:23:39 compute-0 podman[249101]: 2025-11-24 22:23:39.55657973 +0000 UTC m=+0.096001846 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 24 22:23:39 compute-0 podman[249100]: 2025-11-24 22:23:39.584676294 +0000 UTC m=+0.120772626 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.33.7, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git)
Nov 24 22:23:39 compute-0 podman[249099]: 2025-11-24 22:23:39.586760459 +0000 UTC m=+0.127770615 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, release-0.7.12=, architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, distribution-scope=public, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Nov 24 22:23:42 compute-0 nova_compute[189608]: 2025-11-24 22:23:42.816 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:23:43 compute-0 podman[249160]: 2025-11-24 22:23:43.601628472 +0000 UTC m=+0.139841521 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 24 22:23:43 compute-0 nova_compute[189608]: 2025-11-24 22:23:43.745 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:23:45 compute-0 podman[249184]: 2025-11-24 22:23:45.602683011 +0000 UTC m=+0.133703929 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 24 22:23:45 compute-0 podman[249183]: 2025-11-24 22:23:45.654822402 +0000 UTC m=+0.196242964 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 22:23:47 compute-0 nova_compute[189608]: 2025-11-24 22:23:47.824 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:23:48 compute-0 nova_compute[189608]: 2025-11-24 22:23:48.749 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:23:52 compute-0 nova_compute[189608]: 2025-11-24 22:23:52.192 189613 DEBUG nova.compute.manager [req-d8474ca7-be21-4526-baad-5060c512a501 req-8673409e-97a8-4e0e-8085-4959a52befa8 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Received event network-changed-482c3cfb-c114-4d01-aa49-09b8d4fdaaa5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:23:52 compute-0 nova_compute[189608]: 2025-11-24 22:23:52.193 189613 DEBUG nova.compute.manager [req-d8474ca7-be21-4526-baad-5060c512a501 req-8673409e-97a8-4e0e-8085-4959a52befa8 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Refreshing instance network info cache due to event network-changed-482c3cfb-c114-4d01-aa49-09b8d4fdaaa5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 22:23:52 compute-0 nova_compute[189608]: 2025-11-24 22:23:52.194 189613 DEBUG oslo_concurrency.lockutils [req-d8474ca7-be21-4526-baad-5060c512a501 req-8673409e-97a8-4e0e-8085-4959a52befa8 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "refresh_cache-7e7d375c-a42c-41c5-934f-c46941a40067" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:23:52 compute-0 nova_compute[189608]: 2025-11-24 22:23:52.195 189613 DEBUG oslo_concurrency.lockutils [req-d8474ca7-be21-4526-baad-5060c512a501 req-8673409e-97a8-4e0e-8085-4959a52befa8 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquired lock "refresh_cache-7e7d375c-a42c-41c5-934f-c46941a40067" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:23:52 compute-0 nova_compute[189608]: 2025-11-24 22:23:52.196 189613 DEBUG nova.network.neutron [req-d8474ca7-be21-4526-baad-5060c512a501 req-8673409e-97a8-4e0e-8085-4959a52befa8 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Refreshing network info cache for port 482c3cfb-c114-4d01-aa49-09b8d4fdaaa5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 22:23:52 compute-0 nova_compute[189608]: 2025-11-24 22:23:52.458 189613 DEBUG oslo_concurrency.lockutils [None req-28891be2-f986-44ee-99b1-28e96865f29d 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "7e7d375c-a42c-41c5-934f-c46941a40067" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:23:52 compute-0 nova_compute[189608]: 2025-11-24 22:23:52.459 189613 DEBUG oslo_concurrency.lockutils [None req-28891be2-f986-44ee-99b1-28e96865f29d 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "7e7d375c-a42c-41c5-934f-c46941a40067" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:23:52 compute-0 nova_compute[189608]: 2025-11-24 22:23:52.459 189613 DEBUG oslo_concurrency.lockutils [None req-28891be2-f986-44ee-99b1-28e96865f29d 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "7e7d375c-a42c-41c5-934f-c46941a40067-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:23:52 compute-0 nova_compute[189608]: 2025-11-24 22:23:52.460 189613 DEBUG oslo_concurrency.lockutils [None req-28891be2-f986-44ee-99b1-28e96865f29d 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "7e7d375c-a42c-41c5-934f-c46941a40067-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:23:52 compute-0 nova_compute[189608]: 2025-11-24 22:23:52.461 189613 DEBUG oslo_concurrency.lockutils [None req-28891be2-f986-44ee-99b1-28e96865f29d 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "7e7d375c-a42c-41c5-934f-c46941a40067-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:23:52 compute-0 nova_compute[189608]: 2025-11-24 22:23:52.463 189613 INFO nova.compute.manager [None req-28891be2-f986-44ee-99b1-28e96865f29d 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Terminating instance
Nov 24 22:23:52 compute-0 nova_compute[189608]: 2025-11-24 22:23:52.465 189613 DEBUG nova.compute.manager [None req-28891be2-f986-44ee-99b1-28e96865f29d 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
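"Start destroying the instance on the hypervisor" is followed a few lines later by systemd reporting that machine qemu-4-instance-00000004 terminated. A much-simplified sketch of what that step amounts to at the libvirt level (assumes the libvirt-python bindings and the qemu:///system URI; Nova's real driver handles many more cases):

```python
import libvirt

# Hedged illustration only: tear down a running domain and remove its
# persistent definition, roughly what the log records for instance-00000004.
conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('instance-00000004')
if dom.isActive():
    dom.destroy()   # hard power-off; the qemu machine scope goes away
dom.undefine()      # drop the persistent domain definition
conn.close()
```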
Nov 24 22:23:52 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:23:52.481 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7e:33:1a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '7e:8a:65:da:aa:a2'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:23:52 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:23:52.483 106776 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 22:23:52 compute-0 nova_compute[189608]: 2025-11-24 22:23:52.488 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:23:52 compute-0 kernel: tap482c3cfb-c1 (unregistering): left promiscuous mode
Nov 24 22:23:52 compute-0 NetworkManager[56413]: <info>  [1764023032.5349] device (tap482c3cfb-c1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 24 22:23:52 compute-0 nova_compute[189608]: 2025-11-24 22:23:52.547 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:23:52 compute-0 ovn_controller[97889]: 2025-11-24T22:23:52Z|00058|binding|INFO|Releasing lport 482c3cfb-c114-4d01-aa49-09b8d4fdaaa5 from this chassis (sb_readonly=0)
Nov 24 22:23:52 compute-0 ovn_controller[97889]: 2025-11-24T22:23:52Z|00059|binding|INFO|Setting lport 482c3cfb-c114-4d01-aa49-09b8d4fdaaa5 down in Southbound
Nov 24 22:23:52 compute-0 ovn_controller[97889]: 2025-11-24T22:23:52Z|00060|binding|INFO|Removing iface tap482c3cfb-c1 ovn-installed in OVS
Nov 24 22:23:52 compute-0 nova_compute[189608]: 2025-11-24 22:23:52.559 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:23:52 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:23:52.580 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:8e:1d 192.168.0.49'], port_security=['fa:16:3e:b3:8e:1d 192.168.0.49'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-mikdi7lqlsa5-qwni3m7kkkje-hcw6uchrxoeb-port-2mhfyk5gloal', 'neutron:cidrs': '192.168.0.49/24', 'neutron:device_id': '7e7d375c-a42c-41c5-934f-c46941a40067', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1d1b3625-954d-4d8b-8b3f-323c25d9b42a', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-mikdi7lqlsa5-qwni3m7kkkje-hcw6uchrxoeb-port-2mhfyk5gloal', 'neutron:project_id': '309342b7e3e849b2a5dd56651d8fa068', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c15160ba-5e91-4c5e-8c90-4266712c07a0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b82cef4e-b720-4309-9703-51bb202fedba, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], logical_port=482c3cfb-c114-4d01-aa49-09b8d4fdaaa5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:23:52 compute-0 nova_compute[189608]: 2025-11-24 22:23:52.583 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:23:52 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:23:52.583 106776 INFO neutron.agent.ovn.metadata.agent [-] Port 482c3cfb-c114-4d01-aa49-09b8d4fdaaa5 in datapath 1d1b3625-954d-4d8b-8b3f-323c25d9b42a unbound from our chassis
Nov 24 22:23:52 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:23:52.584 106776 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1d1b3625-954d-4d8b-8b3f-323c25d9b42a
Nov 24 22:23:52 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Nov 24 22:23:52 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 2min 9.684s CPU time.
Nov 24 22:23:52 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:23:52.606 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[d3f18c5a-6a82-4aeb-99f0-34b30f138934]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:23:52 compute-0 systemd-machined[155884]: Machine qemu-4-instance-00000004 terminated.
Nov 24 22:23:52 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:23:52.643 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[8bc4aa5b-1b19-4aed-b006-a21f45860634]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:23:52 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:23:52.646 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[26c14fab-d142-46fc-b0c6-69c6be3ded54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:23:52 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:23:52.672 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[bf4b5432-e4cc-4fe9-8279-1d79d5ec67e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:23:52 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:23:52.694 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[2b636179-4a51-40e4-9508-101cdb928d03]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1d1b3625-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:e7:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 15, 'rx_bytes': 532, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 15, 'rx_bytes': 532, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 373788, 'reachable_time': 34416, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249238, 'error': None, 'target': 'ovnmeta-1d1b3625-954d-4d8b-8b3f-323c25d9b42a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:23:52 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:23:52.714 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[049bc038-3fba-4488-992a-5b668b9069ff]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1d1b3625-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 373805, 'tstamp': 373805}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249244, 'error': None, 'target': 'ovnmeta-1d1b3625-954d-4d8b-8b3f-323c25d9b42a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap1d1b3625-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 373810, 'tstamp': 373810}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249244, 'error': None, 'target': 'ovnmeta-1d1b3625-954d-4d8b-8b3f-323c25d9b42a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
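The "privsep: reply[...]" lines are responses from oslo.privsep's privileged helper: the metadata agent itself runs unprivileged and forwards netlink and interface queries to a root daemon, which sends back (status, payload) tuples such as the RTM_NEWLINK/RTM_NEWADDR dumps above. A minimal, hypothetical sketch of how such an entrypoint is declared (context name and function are illustrative, not Neutron's):

```python
from oslo_privsep import capabilities, priv_context

# Hypothetical privsep context: functions decorated with @privileged.entrypoint
# execute in a separate root helper process, and their return values travel
# back over the privsep channel as the "privsep: reply[...]" messages above.
privileged = priv_context.PrivContext(
    "example",
    cfg_section="example_privsep",
    pypath=__name__ + ".privileged",
    capabilities=[capabilities.CAP_NET_ADMIN],
)

@privileged.entrypoint
def link_operstate(ifname):
    # Runs with CAP_NET_ADMIN inside the privsep daemon.
    with open(f"/sys/class/net/{ifname}/operstate") as f:
        return f.read().strip()
```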
Nov 24 22:23:52 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:23:52.716 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d1b3625-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:23:52 compute-0 nova_compute[189608]: 2025-11-24 22:23:52.719 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:23:52 compute-0 nova_compute[189608]: 2025-11-24 22:23:52.727 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:23:52 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:23:52.727 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1d1b3625-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:23:52 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:23:52.728 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 22:23:52 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:23:52.728 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1d1b3625-90, col_values=(('external_ids', {'iface-id': '13073b7d-8165-42cd-87f4-fb1eb15a5b94'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:23:52 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:23:52.728 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
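The DelPortCommand/AddPortCommand/DbSetCommand lines are ovsdbapp transactions the metadata agent issues against the local ovsdb-server: the metadata tap is removed from br-ex, (re)added to br-int, and its external_ids:iface-id is set; the latter two report "Transaction caused no change" because the port was already in the desired state. A hedged sketch of the same idiom with ovsdbapp (socket path assumed; port and iface-id copied from the log):

```python
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

# Assumed local ovsdb-server socket; adjust to the deployment.
idl = connection.OvsdbIdl.from_server("unix:/run/openvswitch/db.sock",
                                      "Open_vSwitch")
ovs = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

# Same commands as logged above, batched here into a single transaction
# for brevity (the agent ran them as separate single-command transactions).
with ovs.transaction(check_error=True) as txn:
    txn.add(ovs.del_port("tap1d1b3625-90", bridge="br-ex", if_exists=True))
    txn.add(ovs.add_port("br-int", "tap1d1b3625-90", may_exist=True))
    txn.add(ovs.db_set(
        "Interface", "tap1d1b3625-90",
        ("external_ids", {"iface-id": "13073b7d-8165-42cd-87f4-fb1eb15a5b94"})))
```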
Nov 24 22:23:52 compute-0 nova_compute[189608]: 2025-11-24 22:23:52.769 189613 INFO nova.virt.libvirt.driver [-] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Instance destroyed successfully.
Nov 24 22:23:52 compute-0 nova_compute[189608]: 2025-11-24 22:23:52.770 189613 DEBUG nova.objects.instance [None req-28891be2-f986-44ee-99b1-28e96865f29d 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lazy-loading 'resources' on Instance uuid 7e7d375c-a42c-41c5-934f-c46941a40067 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:23:52 compute-0 nova_compute[189608]: 2025-11-24 22:23:52.784 189613 DEBUG nova.virt.libvirt.vif [None req-28891be2-f986-44ee-99b1-28e96865f29d 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T22:13:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-7lqlsa5-qwni3m7kkkje-hcw6uchrxoeb-vnf-7cvls2zpo5gs',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-7lqlsa5-qwni3m7kkkje-hcw6uchrxoeb-vnf-7cvls2zpo5gs',id=4,image_ref='a63b9561-12dc-4c11-858f-aa6fafbed036',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-24T22:13:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='b438824c-ce52-4539-9db6-355e0ca018db'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='309342b7e3e849b2a5dd56651d8fa068',ramdisk_id='',reservation_id='r-anpt6b1a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='a63b9561-12dc-4c11-858f-aa6fafbed036',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T22:13:34Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT03OTgzMDI3NDg5Mzk1MTMwMDY5PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTc5ODMwMjc0ODkzOTUxMzAwNjk9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09Nzk4MzAyNzQ4OTM5NTEzMDA2OT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91
dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTc5ODMwMjc0ODkzOTUxMzAwNjk9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT03OTgzMDI3NDg5Mzk1MTMwMDY5PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT03OTgzMDI3NDg5Mzk1MTMwMDY5PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0U
tMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Nov 24 22:23:52 compute-0 nova_compute[189608]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09Nzk4MzAyNzQ4OTM5NTEzMDA2OT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTc5ODMwMjc0ODkzOTUxMzAwNjk9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT03OTgzMDI3NDg5Mzk1MTMwMDY5PT0tLQo=',user_id='572aaac113f54af8a894707849aed6bf',uuid=7e7d375c-a42c-41c5-934f-c46941a40067,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "482c3cfb-c114-4d01-aa49-09b8d4fdaaa5", "address": "fa:16:3e:b3:8e:1d", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.49", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap482c3cfb-c1", "ovs_interfaceid": "482c3cfb-c114-4d01-aa49-09b8d4fdaaa5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 24 22:23:52 compute-0 nova_compute[189608]: 2025-11-24 22:23:52.784 189613 DEBUG nova.network.os_vif_util [None req-28891be2-f986-44ee-99b1-28e96865f29d 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Converting VIF {"id": "482c3cfb-c114-4d01-aa49-09b8d4fdaaa5", "address": "fa:16:3e:b3:8e:1d", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.49", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap482c3cfb-c1", "ovs_interfaceid": "482c3cfb-c114-4d01-aa49-09b8d4fdaaa5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 22:23:52 compute-0 nova_compute[189608]: 2025-11-24 22:23:52.785 189613 DEBUG nova.network.os_vif_util [None req-28891be2-f986-44ee-99b1-28e96865f29d 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b3:8e:1d,bridge_name='br-int',has_traffic_filtering=True,id=482c3cfb-c114-4d01-aa49-09b8d4fdaaa5,network=Network(1d1b3625-954d-4d8b-8b3f-323c25d9b42a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap482c3cfb-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 22:23:52 compute-0 nova_compute[189608]: 2025-11-24 22:23:52.786 189613 DEBUG os_vif [None req-28891be2-f986-44ee-99b1-28e96865f29d 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b3:8e:1d,bridge_name='br-int',has_traffic_filtering=True,id=482c3cfb-c114-4d01-aa49-09b8d4fdaaa5,network=Network(1d1b3625-954d-4d8b-8b3f-323c25d9b42a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap482c3cfb-c1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 24 22:23:52 compute-0 nova_compute[189608]: 2025-11-24 22:23:52.788 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:23:52 compute-0 nova_compute[189608]: 2025-11-24 22:23:52.789 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap482c3cfb-c1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:23:52 compute-0 nova_compute[189608]: 2025-11-24 22:23:52.791 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:23:52 compute-0 nova_compute[189608]: 2025-11-24 22:23:52.793 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 22:23:52 compute-0 nova_compute[189608]: 2025-11-24 22:23:52.795 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:23:52 compute-0 nova_compute[189608]: 2025-11-24 22:23:52.798 189613 INFO os_vif [None req-28891be2-f986-44ee-99b1-28e96865f29d 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b3:8e:1d,bridge_name='br-int',has_traffic_filtering=True,id=482c3cfb-c114-4d01-aa49-09b8d4fdaaa5,network=Network(1d1b3625-954d-4d8b-8b3f-323c25d9b42a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap482c3cfb-c1')
Nov 24 22:23:52 compute-0 nova_compute[189608]: 2025-11-24 22:23:52.799 189613 INFO nova.virt.libvirt.driver [None req-28891be2-f986-44ee-99b1-28e96865f29d 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Deleting instance files /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067_del
Nov 24 22:23:52 compute-0 nova_compute[189608]: 2025-11-24 22:23:52.801 189613 INFO nova.virt.libvirt.driver [None req-28891be2-f986-44ee-99b1-28e96865f29d 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Deletion of /var/lib/nova/instances/7e7d375c-a42c-41c5-934f-c46941a40067_del complete
Nov 24 22:23:52 compute-0 nova_compute[189608]: 2025-11-24 22:23:52.843 189613 DEBUG nova.compute.manager [req-b9928ec2-48ff-4491-b015-709d12d359ee req-1523e2f3-14d1-4a77-a25d-c31f05f4298f c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Received event network-vif-unplugged-482c3cfb-c114-4d01-aa49-09b8d4fdaaa5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:23:52 compute-0 nova_compute[189608]: 2025-11-24 22:23:52.844 189613 DEBUG oslo_concurrency.lockutils [req-b9928ec2-48ff-4491-b015-709d12d359ee req-1523e2f3-14d1-4a77-a25d-c31f05f4298f c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "7e7d375c-a42c-41c5-934f-c46941a40067-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:23:52 compute-0 nova_compute[189608]: 2025-11-24 22:23:52.844 189613 DEBUG oslo_concurrency.lockutils [req-b9928ec2-48ff-4491-b015-709d12d359ee req-1523e2f3-14d1-4a77-a25d-c31f05f4298f c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "7e7d375c-a42c-41c5-934f-c46941a40067-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:23:52 compute-0 nova_compute[189608]: 2025-11-24 22:23:52.845 189613 DEBUG oslo_concurrency.lockutils [req-b9928ec2-48ff-4491-b015-709d12d359ee req-1523e2f3-14d1-4a77-a25d-c31f05f4298f c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "7e7d375c-a42c-41c5-934f-c46941a40067-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:23:52 compute-0 nova_compute[189608]: 2025-11-24 22:23:52.845 189613 DEBUG nova.compute.manager [req-b9928ec2-48ff-4491-b015-709d12d359ee req-1523e2f3-14d1-4a77-a25d-c31f05f4298f c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] No waiting events found dispatching network-vif-unplugged-482c3cfb-c114-4d01-aa49-09b8d4fdaaa5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 22:23:52 compute-0 nova_compute[189608]: 2025-11-24 22:23:52.846 189613 DEBUG nova.compute.manager [req-b9928ec2-48ff-4491-b015-709d12d359ee req-1523e2f3-14d1-4a77-a25d-c31f05f4298f c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Received event network-vif-unplugged-482c3cfb-c114-4d01-aa49-09b8d4fdaaa5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 24 22:23:52 compute-0 nova_compute[189608]: 2025-11-24 22:23:52.882 189613 INFO nova.compute.manager [None req-28891be2-f986-44ee-99b1-28e96865f29d 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Took 0.42 seconds to destroy the instance on the hypervisor.
Nov 24 22:23:52 compute-0 nova_compute[189608]: 2025-11-24 22:23:52.883 189613 DEBUG oslo.service.loopingcall [None req-28891be2-f986-44ee-99b1-28e96865f29d 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 24 22:23:52 compute-0 nova_compute[189608]: 2025-11-24 22:23:52.885 189613 DEBUG nova.compute.manager [-] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 24 22:23:52 compute-0 nova_compute[189608]: 2025-11-24 22:23:52.885 189613 DEBUG nova.network.neutron [-] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 24 22:23:53 compute-0 rsyslogd[237036]: message too long (8192) with configured size 8096, begin of message is: 2025-11-24 22:23:52.784 189613 DEBUG nova.virt.libvirt.vif [None req-28891be2-f9 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 24 22:23:53 compute-0 nova_compute[189608]: 2025-11-24 22:23:53.636 189613 DEBUG nova.network.neutron [req-d8474ca7-be21-4526-baad-5060c512a501 req-8673409e-97a8-4e0e-8085-4959a52befa8 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Updated VIF entry in instance network info cache for port 482c3cfb-c114-4d01-aa49-09b8d4fdaaa5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 22:23:53 compute-0 nova_compute[189608]: 2025-11-24 22:23:53.637 189613 DEBUG nova.network.neutron [req-d8474ca7-be21-4526-baad-5060c512a501 req-8673409e-97a8-4e0e-8085-4959a52befa8 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Updating instance_info_cache with network_info: [{"id": "482c3cfb-c114-4d01-aa49-09b8d4fdaaa5", "address": "fa:16:3e:b3:8e:1d", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.49", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap482c3cfb-c1", "ovs_interfaceid": "482c3cfb-c114-4d01-aa49-09b8d4fdaaa5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:23:53 compute-0 nova_compute[189608]: 2025-11-24 22:23:53.663 189613 DEBUG oslo_concurrency.lockutils [req-d8474ca7-be21-4526-baad-5060c512a501 req-8673409e-97a8-4e0e-8085-4959a52befa8 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Releasing lock "refresh_cache-7e7d375c-a42c-41c5-934f-c46941a40067" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:23:53 compute-0 nova_compute[189608]: 2025-11-24 22:23:53.754 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:23:53 compute-0 nova_compute[189608]: 2025-11-24 22:23:53.972 189613 DEBUG nova.network.neutron [-] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:23:53 compute-0 nova_compute[189608]: 2025-11-24 22:23:53.989 189613 INFO nova.compute.manager [-] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Took 1.10 seconds to deallocate network for instance.
Nov 24 22:23:54 compute-0 nova_compute[189608]: 2025-11-24 22:23:54.027 189613 DEBUG oslo_concurrency.lockutils [None req-28891be2-f986-44ee-99b1-28e96865f29d 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:23:54 compute-0 nova_compute[189608]: 2025-11-24 22:23:54.028 189613 DEBUG oslo_concurrency.lockutils [None req-28891be2-f986-44ee-99b1-28e96865f29d 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:23:54 compute-0 nova_compute[189608]: 2025-11-24 22:23:54.103 189613 DEBUG nova.compute.provider_tree [None req-28891be2-f986-44ee-99b1-28e96865f29d 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:23:54 compute-0 nova_compute[189608]: 2025-11-24 22:23:54.118 189613 DEBUG nova.scheduler.client.report [None req-28891be2-f986-44ee-99b1-28e96865f29d 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:23:54 compute-0 nova_compute[189608]: 2025-11-24 22:23:54.142 189613 DEBUG oslo_concurrency.lockutils [None req-28891be2-f986-44ee-99b1-28e96865f29d 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:23:54 compute-0 nova_compute[189608]: 2025-11-24 22:23:54.174 189613 INFO nova.scheduler.client.report [None req-28891be2-f986-44ee-99b1-28e96865f29d 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Deleted allocations for instance 7e7d375c-a42c-41c5-934f-c46941a40067
Nov 24 22:23:54 compute-0 nova_compute[189608]: 2025-11-24 22:23:54.260 189613 DEBUG oslo_concurrency.lockutils [None req-28891be2-f986-44ee-99b1-28e96865f29d 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "7e7d375c-a42c-41c5-934f-c46941a40067" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.801s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:23:54 compute-0 podman[249262]: 2025-11-24 22:23:54.55725424 +0000 UTC m=+0.100486256 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 24 22:23:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:23:54.584 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:23:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:23:54.585 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:23:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:23:54.586 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:23:54 compute-0 nova_compute[189608]: 2025-11-24 22:23:54.945 189613 DEBUG nova.compute.manager [req-95ceb83f-99a2-4fd5-bd9c-b78e11fbe6c3 req-82134d3f-7d5e-4478-b2fc-489050404f87 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Received event network-vif-plugged-482c3cfb-c114-4d01-aa49-09b8d4fdaaa5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:23:54 compute-0 nova_compute[189608]: 2025-11-24 22:23:54.946 189613 DEBUG oslo_concurrency.lockutils [req-95ceb83f-99a2-4fd5-bd9c-b78e11fbe6c3 req-82134d3f-7d5e-4478-b2fc-489050404f87 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "7e7d375c-a42c-41c5-934f-c46941a40067-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:23:54 compute-0 nova_compute[189608]: 2025-11-24 22:23:54.946 189613 DEBUG oslo_concurrency.lockutils [req-95ceb83f-99a2-4fd5-bd9c-b78e11fbe6c3 req-82134d3f-7d5e-4478-b2fc-489050404f87 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "7e7d375c-a42c-41c5-934f-c46941a40067-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:23:54 compute-0 nova_compute[189608]: 2025-11-24 22:23:54.947 189613 DEBUG oslo_concurrency.lockutils [req-95ceb83f-99a2-4fd5-bd9c-b78e11fbe6c3 req-82134d3f-7d5e-4478-b2fc-489050404f87 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "7e7d375c-a42c-41c5-934f-c46941a40067-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:23:54 compute-0 nova_compute[189608]: 2025-11-24 22:23:54.947 189613 DEBUG nova.compute.manager [req-95ceb83f-99a2-4fd5-bd9c-b78e11fbe6c3 req-82134d3f-7d5e-4478-b2fc-489050404f87 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] No waiting events found dispatching network-vif-plugged-482c3cfb-c114-4d01-aa49-09b8d4fdaaa5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 22:23:54 compute-0 nova_compute[189608]: 2025-11-24 22:23:54.947 189613 WARNING nova.compute.manager [req-95ceb83f-99a2-4fd5-bd9c-b78e11fbe6c3 req-82134d3f-7d5e-4478-b2fc-489050404f87 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Received unexpected event network-vif-plugged-482c3cfb-c114-4d01-aa49-09b8d4fdaaa5 for instance with vm_state deleted and task_state None.
Nov 24 22:23:56 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:23:56.486 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d2f80616-70e9-484c-836d-1edab81fe5d9, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:23:57 compute-0 nova_compute[189608]: 2025-11-24 22:23:57.792 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:23:58 compute-0 nova_compute[189608]: 2025-11-24 22:23:58.756 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:23:59 compute-0 podman[203795]: time="2025-11-24T22:23:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:23:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:23:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 24 22:23:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:23:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4799 "" "Go-http-client/1.1"
Nov 24 22:24:01 compute-0 openstack_network_exporter[205945]: ERROR   22:24:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:24:01 compute-0 openstack_network_exporter[205945]: ERROR   22:24:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:24:01 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:24:01 compute-0 openstack_network_exporter[205945]: ERROR   22:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:24:01 compute-0 openstack_network_exporter[205945]: ERROR   22:24:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:24:01 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:24:01 compute-0 openstack_network_exporter[205945]: ERROR   22:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:24:01 compute-0 podman[249284]: 2025-11-24 22:24:01.572236654 +0000 UTC m=+0.120629592 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 22:24:02 compute-0 nova_compute[189608]: 2025-11-24 22:24:02.795 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:24:03 compute-0 nova_compute[189608]: 2025-11-24 22:24:03.759 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:24:04 compute-0 podman[249305]: 2025-11-24 22:24:04.582176487 +0000 UTC m=+0.125326648 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, org.label-schema.build-date=20251118)
Nov 24 22:24:07 compute-0 nova_compute[189608]: 2025-11-24 22:24:07.765 189613 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764023032.7638834, 7e7d375c-a42c-41c5-934f-c46941a40067 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:24:07 compute-0 nova_compute[189608]: 2025-11-24 22:24:07.766 189613 INFO nova.compute.manager [-] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] VM Stopped (Lifecycle Event)
Nov 24 22:24:07 compute-0 nova_compute[189608]: 2025-11-24 22:24:07.791 189613 DEBUG nova.compute.manager [None req-faa1e518-2701-42fa-aa1f-cd45657bfe8b - - - - - -] [instance: 7e7d375c-a42c-41c5-934f-c46941a40067] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:24:07 compute-0 nova_compute[189608]: 2025-11-24 22:24:07.799 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:24:08 compute-0 nova_compute[189608]: 2025-11-24 22:24:08.761 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:24:10 compute-0 podman[249325]: 2025-11-24 22:24:10.553862194 +0000 UTC m=+0.098963918 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, container_name=openstack_network_exporter, release=1755695350, vcs-type=git)
Nov 24 22:24:10 compute-0 podman[249324]: 2025-11-24 22:24:10.57593574 +0000 UTC m=+0.126179945 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.buildah.version=1.29.0, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, config_id=edpm, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, release-0.7.12=, version=9.4, distribution-scope=public, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 24 22:24:10 compute-0 podman[249326]: 2025-11-24 22:24:10.579633105 +0000 UTC m=+0.112197110 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, config_id=edpm)
Nov 24 22:24:10 compute-0 nova_compute[189608]: 2025-11-24 22:24:10.889 189613 DEBUG oslo_concurrency.lockutils [None req-09e98edc-c2d8-41a6-be23-aa994098a380 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "ea741b45-c6b4-41c0-a70f-c752b616faa2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:24:10 compute-0 nova_compute[189608]: 2025-11-24 22:24:10.890 189613 DEBUG oslo_concurrency.lockutils [None req-09e98edc-c2d8-41a6-be23-aa994098a380 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "ea741b45-c6b4-41c0-a70f-c752b616faa2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:24:10 compute-0 nova_compute[189608]: 2025-11-24 22:24:10.890 189613 DEBUG oslo_concurrency.lockutils [None req-09e98edc-c2d8-41a6-be23-aa994098a380 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "ea741b45-c6b4-41c0-a70f-c752b616faa2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:24:10 compute-0 nova_compute[189608]: 2025-11-24 22:24:10.891 189613 DEBUG oslo_concurrency.lockutils [None req-09e98edc-c2d8-41a6-be23-aa994098a380 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "ea741b45-c6b4-41c0-a70f-c752b616faa2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:24:10 compute-0 nova_compute[189608]: 2025-11-24 22:24:10.891 189613 DEBUG oslo_concurrency.lockutils [None req-09e98edc-c2d8-41a6-be23-aa994098a380 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "ea741b45-c6b4-41c0-a70f-c752b616faa2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:24:10 compute-0 nova_compute[189608]: 2025-11-24 22:24:10.893 189613 INFO nova.compute.manager [None req-09e98edc-c2d8-41a6-be23-aa994098a380 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Terminating instance
Nov 24 22:24:10 compute-0 nova_compute[189608]: 2025-11-24 22:24:10.895 189613 DEBUG nova.compute.manager [None req-09e98edc-c2d8-41a6-be23-aa994098a380 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 24 22:24:10 compute-0 kernel: tap5430cfcb-55 (unregistering): left promiscuous mode
Nov 24 22:24:10 compute-0 NetworkManager[56413]: <info>  [1764023050.9524] device (tap5430cfcb-55): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 24 22:24:10 compute-0 nova_compute[189608]: 2025-11-24 22:24:10.967 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:24:10 compute-0 ovn_controller[97889]: 2025-11-24T22:24:10Z|00061|binding|INFO|Releasing lport 5430cfcb-550b-4518-9caa-0720f99730b9 from this chassis (sb_readonly=0)
Nov 24 22:24:10 compute-0 ovn_controller[97889]: 2025-11-24T22:24:10Z|00062|binding|INFO|Setting lport 5430cfcb-550b-4518-9caa-0720f99730b9 down in Southbound
Nov 24 22:24:10 compute-0 ovn_controller[97889]: 2025-11-24T22:24:10Z|00063|binding|INFO|Removing iface tap5430cfcb-55 ovn-installed in OVS
Nov 24 22:24:10 compute-0 nova_compute[189608]: 2025-11-24 22:24:10.972 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:24:10 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:24:10.994 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:21:ae 192.168.0.169'], port_security=['fa:16:3e:85:21:ae 192.168.0.169'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.169/24', 'neutron:device_id': 'ea741b45-c6b4-41c0-a70f-c752b616faa2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1d1b3625-954d-4d8b-8b3f-323c25d9b42a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '309342b7e3e849b2a5dd56651d8fa068', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c15160ba-5e91-4c5e-8c90-4266712c07a0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.190'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b82cef4e-b720-4309-9703-51bb202fedba, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], logical_port=5430cfcb-550b-4518-9caa-0720f99730b9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:24:10 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:24:10.996 106776 INFO neutron.agent.ovn.metadata.agent [-] Port 5430cfcb-550b-4518-9caa-0720f99730b9 in datapath 1d1b3625-954d-4d8b-8b3f-323c25d9b42a unbound from our chassis
Nov 24 22:24:10 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:24:10.997 106776 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1d1b3625-954d-4d8b-8b3f-323c25d9b42a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 22:24:11 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:24:10.998 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[f9df157a-b972-47a7-93e7-0bcf1060c319]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:24:11 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:24:10.999 106776 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1d1b3625-954d-4d8b-8b3f-323c25d9b42a namespace which is not needed anymore
Nov 24 22:24:11 compute-0 nova_compute[189608]: 2025-11-24 22:24:11.008 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:24:11 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Nov 24 22:24:11 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 3min 28.559s CPU time.
Nov 24 22:24:11 compute-0 systemd-machined[155884]: Machine qemu-1-instance-00000001 terminated.
Nov 24 22:24:11 compute-0 nova_compute[189608]: 2025-11-24 22:24:11.188 189613 INFO nova.virt.libvirt.driver [-] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Instance destroyed successfully.
Nov 24 22:24:11 compute-0 nova_compute[189608]: 2025-11-24 22:24:11.188 189613 DEBUG nova.objects.instance [None req-09e98edc-c2d8-41a6-be23-aa994098a380 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lazy-loading 'resources' on Instance uuid ea741b45-c6b4-41c0-a70f-c752b616faa2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:24:11 compute-0 nova_compute[189608]: 2025-11-24 22:24:11.198 189613 DEBUG nova.virt.libvirt.vif [None req-09e98edc-c2d8-41a6-be23-aa994098a380 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T22:04:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='a63b9561-12dc-4c11-858f-aa6fafbed036',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-24T22:04:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='309342b7e3e849b2a5dd56651d8fa068',ramdisk_id='',reservation_id='r-ju5wfah9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='a63b9561-12dc-4c11-858f-aa6fafbed036',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T22:04:47Z,user_data=None,user_id='572aaac113f54af8a894707849aed6bf',uuid=ea741b45-c6b4-41c0-a70f-c752b616faa2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5430cfcb-550b-4518-9caa-0720f99730b9", "address": "fa:16:3e:85:21:ae", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5430cfcb-55", "ovs_interfaceid": "5430cfcb-550b-4518-9caa-0720f99730b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 24 22:24:11 compute-0 nova_compute[189608]: 2025-11-24 22:24:11.198 189613 DEBUG nova.network.os_vif_util [None req-09e98edc-c2d8-41a6-be23-aa994098a380 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Converting VIF {"id": "5430cfcb-550b-4518-9caa-0720f99730b9", "address": "fa:16:3e:85:21:ae", "network": {"id": "1d1b3625-954d-4d8b-8b3f-323c25d9b42a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309342b7e3e849b2a5dd56651d8fa068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5430cfcb-55", "ovs_interfaceid": "5430cfcb-550b-4518-9caa-0720f99730b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 22:24:11 compute-0 nova_compute[189608]: 2025-11-24 22:24:11.200 189613 DEBUG nova.network.os_vif_util [None req-09e98edc-c2d8-41a6-be23-aa994098a380 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:85:21:ae,bridge_name='br-int',has_traffic_filtering=True,id=5430cfcb-550b-4518-9caa-0720f99730b9,network=Network(1d1b3625-954d-4d8b-8b3f-323c25d9b42a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5430cfcb-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 22:24:11 compute-0 nova_compute[189608]: 2025-11-24 22:24:11.200 189613 DEBUG os_vif [None req-09e98edc-c2d8-41a6-be23-aa994098a380 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:85:21:ae,bridge_name='br-int',has_traffic_filtering=True,id=5430cfcb-550b-4518-9caa-0720f99730b9,network=Network(1d1b3625-954d-4d8b-8b3f-323c25d9b42a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5430cfcb-55') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 24 22:24:11 compute-0 nova_compute[189608]: 2025-11-24 22:24:11.202 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:24:11 compute-0 nova_compute[189608]: 2025-11-24 22:24:11.203 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5430cfcb-55, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:24:11 compute-0 nova_compute[189608]: 2025-11-24 22:24:11.207 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:24:11 compute-0 nova_compute[189608]: 2025-11-24 22:24:11.211 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 22:24:11 compute-0 nova_compute[189608]: 2025-11-24 22:24:11.214 189613 INFO os_vif [None req-09e98edc-c2d8-41a6-be23-aa994098a380 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:85:21:ae,bridge_name='br-int',has_traffic_filtering=True,id=5430cfcb-550b-4518-9caa-0720f99730b9,network=Network(1d1b3625-954d-4d8b-8b3f-323c25d9b42a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5430cfcb-55')
Nov 24 22:24:11 compute-0 nova_compute[189608]: 2025-11-24 22:24:11.215 189613 INFO nova.virt.libvirt.driver [None req-09e98edc-c2d8-41a6-be23-aa994098a380 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Deleting instance files /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2_del
Nov 24 22:24:11 compute-0 neutron-haproxy-ovnmeta-1d1b3625-954d-4d8b-8b3f-323c25d9b42a[240144]: [NOTICE]   (240148) : haproxy version is 2.8.14-c23fe91
Nov 24 22:24:11 compute-0 neutron-haproxy-ovnmeta-1d1b3625-954d-4d8b-8b3f-323c25d9b42a[240144]: [NOTICE]   (240148) : path to executable is /usr/sbin/haproxy
Nov 24 22:24:11 compute-0 neutron-haproxy-ovnmeta-1d1b3625-954d-4d8b-8b3f-323c25d9b42a[240144]: [WARNING]  (240148) : Exiting Master process...
Nov 24 22:24:11 compute-0 neutron-haproxy-ovnmeta-1d1b3625-954d-4d8b-8b3f-323c25d9b42a[240144]: [WARNING]  (240148) : Exiting Master process...
Nov 24 22:24:11 compute-0 nova_compute[189608]: 2025-11-24 22:24:11.216 189613 INFO nova.virt.libvirt.driver [None req-09e98edc-c2d8-41a6-be23-aa994098a380 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Deletion of /var/lib/nova/instances/ea741b45-c6b4-41c0-a70f-c752b616faa2_del complete
Nov 24 22:24:11 compute-0 neutron-haproxy-ovnmeta-1d1b3625-954d-4d8b-8b3f-323c25d9b42a[240144]: [ALERT]    (240148) : Current worker (240150) exited with code 143 (Terminated)
Nov 24 22:24:11 compute-0 neutron-haproxy-ovnmeta-1d1b3625-954d-4d8b-8b3f-323c25d9b42a[240144]: [WARNING]  (240148) : All workers exited. Exiting... (0)
Nov 24 22:24:11 compute-0 systemd[1]: libpod-eb66f21bbd8a50c8e293d8d8ae441358549f8de3aeaf641a97b2f4e46fc0b215.scope: Deactivated successfully.
Nov 24 22:24:11 compute-0 podman[249412]: 2025-11-24 22:24:11.228646058 +0000 UTC m=+0.079378699 container died eb66f21bbd8a50c8e293d8d8ae441358549f8de3aeaf641a97b2f4e46fc0b215 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1d1b3625-954d-4d8b-8b3f-323c25d9b42a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 22:24:11 compute-0 nova_compute[189608]: 2025-11-24 22:24:11.229 189613 DEBUG nova.compute.manager [req-6d0a3540-6d58-45f6-9450-f302fa23b2be req-f27e92df-5bc9-4a00-9543-abc0830c8e5c c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Received event network-vif-unplugged-5430cfcb-550b-4518-9caa-0720f99730b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:24:11 compute-0 nova_compute[189608]: 2025-11-24 22:24:11.230 189613 DEBUG oslo_concurrency.lockutils [req-6d0a3540-6d58-45f6-9450-f302fa23b2be req-f27e92df-5bc9-4a00-9543-abc0830c8e5c c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "ea741b45-c6b4-41c0-a70f-c752b616faa2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:24:11 compute-0 nova_compute[189608]: 2025-11-24 22:24:11.231 189613 DEBUG oslo_concurrency.lockutils [req-6d0a3540-6d58-45f6-9450-f302fa23b2be req-f27e92df-5bc9-4a00-9543-abc0830c8e5c c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "ea741b45-c6b4-41c0-a70f-c752b616faa2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:24:11 compute-0 nova_compute[189608]: 2025-11-24 22:24:11.231 189613 DEBUG oslo_concurrency.lockutils [req-6d0a3540-6d58-45f6-9450-f302fa23b2be req-f27e92df-5bc9-4a00-9543-abc0830c8e5c c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "ea741b45-c6b4-41c0-a70f-c752b616faa2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:24:11 compute-0 nova_compute[189608]: 2025-11-24 22:24:11.231 189613 DEBUG nova.compute.manager [req-6d0a3540-6d58-45f6-9450-f302fa23b2be req-f27e92df-5bc9-4a00-9543-abc0830c8e5c c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] No waiting events found dispatching network-vif-unplugged-5430cfcb-550b-4518-9caa-0720f99730b9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 22:24:11 compute-0 nova_compute[189608]: 2025-11-24 22:24:11.231 189613 DEBUG nova.compute.manager [req-6d0a3540-6d58-45f6-9450-f302fa23b2be req-f27e92df-5bc9-4a00-9543-abc0830c8e5c c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Received event network-vif-unplugged-5430cfcb-550b-4518-9caa-0720f99730b9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 24 22:24:11 compute-0 nova_compute[189608]: 2025-11-24 22:24:11.272 189613 INFO nova.compute.manager [None req-09e98edc-c2d8-41a6-be23-aa994098a380 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Took 0.38 seconds to destroy the instance on the hypervisor.
Nov 24 22:24:11 compute-0 nova_compute[189608]: 2025-11-24 22:24:11.273 189613 DEBUG oslo.service.loopingcall [None req-09e98edc-c2d8-41a6-be23-aa994098a380 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 24 22:24:11 compute-0 nova_compute[189608]: 2025-11-24 22:24:11.273 189613 DEBUG nova.compute.manager [-] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 24 22:24:11 compute-0 nova_compute[189608]: 2025-11-24 22:24:11.273 189613 DEBUG nova.network.neutron [-] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 24 22:24:11 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eb66f21bbd8a50c8e293d8d8ae441358549f8de3aeaf641a97b2f4e46fc0b215-userdata-shm.mount: Deactivated successfully.
Nov 24 22:24:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-e52f81f2f4cb79e326dabd416d85f75720cb228a555093335894afbeace8297d-merged.mount: Deactivated successfully.
Nov 24 22:24:11 compute-0 podman[249412]: 2025-11-24 22:24:11.288776039 +0000 UTC m=+0.139508680 container cleanup eb66f21bbd8a50c8e293d8d8ae441358549f8de3aeaf641a97b2f4e46fc0b215 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1d1b3625-954d-4d8b-8b3f-323c25d9b42a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 22:24:11 compute-0 systemd[1]: libpod-conmon-eb66f21bbd8a50c8e293d8d8ae441358549f8de3aeaf641a97b2f4e46fc0b215.scope: Deactivated successfully.
Nov 24 22:24:11 compute-0 podman[249455]: 2025-11-24 22:24:11.393644479 +0000 UTC m=+0.073710652 container remove eb66f21bbd8a50c8e293d8d8ae441358549f8de3aeaf641a97b2f4e46fc0b215 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1d1b3625-954d-4d8b-8b3f-323c25d9b42a, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 22:24:11 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:24:11.407 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[77f25f59-361e-4afb-b89a-28e7f8a5ef35]: (4, ('Mon Nov 24 10:24:11 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1d1b3625-954d-4d8b-8b3f-323c25d9b42a (eb66f21bbd8a50c8e293d8d8ae441358549f8de3aeaf641a97b2f4e46fc0b215)\neb66f21bbd8a50c8e293d8d8ae441358549f8de3aeaf641a97b2f4e46fc0b215\nMon Nov 24 10:24:11 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1d1b3625-954d-4d8b-8b3f-323c25d9b42a (eb66f21bbd8a50c8e293d8d8ae441358549f8de3aeaf641a97b2f4e46fc0b215)\neb66f21bbd8a50c8e293d8d8ae441358549f8de3aeaf641a97b2f4e46fc0b215\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:24:11 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:24:11.410 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[fe46d4b0-3b59-41aa-ba64-f44a33eea5bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:24:11 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:24:11.411 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d1b3625-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:24:11 compute-0 nova_compute[189608]: 2025-11-24 22:24:11.415 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:24:11 compute-0 kernel: tap1d1b3625-90: left promiscuous mode
Nov 24 22:24:11 compute-0 nova_compute[189608]: 2025-11-24 22:24:11.430 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:24:11 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:24:11.434 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[b113f9f5-a96a-42b0-a2f9-e2d10198d6cd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:24:11 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:24:11.454 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[9513613c-b508-4650-aebb-54163bb76d4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:24:11 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:24:11.457 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[98dc4e03-e95d-467e-90b8-64c38df95c2e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:24:11 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:24:11.472 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[22c5866e-2a3d-4dc0-b772-4f0c0ada5d82]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 373773, 'reachable_time': 15364, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249468, 'error': None, 'target': 'ovnmeta-1d1b3625-954d-4d8b-8b3f-323c25d9b42a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:24:11 compute-0 systemd[1]: run-netns-ovnmeta\x2d1d1b3625\x2d954d\x2d4d8b\x2d8b3f\x2d323c25d9b42a.mount: Deactivated successfully.
Nov 24 22:24:11 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:24:11.487 106891 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1d1b3625-954d-4d8b-8b3f-323c25d9b42a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 24 22:24:11 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:24:11.489 106891 DEBUG oslo.privsep.daemon [-] privsep: reply[7b0f9ffe-f6f7-4f78-ba39-dc2876806e6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:24:12 compute-0 nova_compute[189608]: 2025-11-24 22:24:12.072 189613 DEBUG nova.network.neutron [-] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:24:12 compute-0 nova_compute[189608]: 2025-11-24 22:24:12.101 189613 INFO nova.compute.manager [-] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Took 0.83 seconds to deallocate network for instance.
Nov 24 22:24:12 compute-0 nova_compute[189608]: 2025-11-24 22:24:12.142 189613 DEBUG oslo_concurrency.lockutils [None req-09e98edc-c2d8-41a6-be23-aa994098a380 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:24:12 compute-0 nova_compute[189608]: 2025-11-24 22:24:12.142 189613 DEBUG oslo_concurrency.lockutils [None req-09e98edc-c2d8-41a6-be23-aa994098a380 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:24:12 compute-0 nova_compute[189608]: 2025-11-24 22:24:12.172 189613 DEBUG nova.compute.manager [req-f23b61e1-f367-4c32-a8ef-e89e12c621bc req-27147642-1f56-45fb-9cc6-b4310a4fbf96 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Received event network-vif-deleted-5430cfcb-550b-4518-9caa-0720f99730b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:24:12 compute-0 nova_compute[189608]: 2025-11-24 22:24:12.212 189613 DEBUG nova.compute.provider_tree [None req-09e98edc-c2d8-41a6-be23-aa994098a380 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:24:12 compute-0 nova_compute[189608]: 2025-11-24 22:24:12.225 189613 DEBUG nova.scheduler.client.report [None req-09e98edc-c2d8-41a6-be23-aa994098a380 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:24:12 compute-0 nova_compute[189608]: 2025-11-24 22:24:12.249 189613 DEBUG oslo_concurrency.lockutils [None req-09e98edc-c2d8-41a6-be23-aa994098a380 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:24:12 compute-0 nova_compute[189608]: 2025-11-24 22:24:12.281 189613 INFO nova.scheduler.client.report [None req-09e98edc-c2d8-41a6-be23-aa994098a380 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Deleted allocations for instance ea741b45-c6b4-41c0-a70f-c752b616faa2
Nov 24 22:24:12 compute-0 nova_compute[189608]: 2025-11-24 22:24:12.337 189613 DEBUG oslo_concurrency.lockutils [None req-09e98edc-c2d8-41a6-be23-aa994098a380 572aaac113f54af8a894707849aed6bf 309342b7e3e849b2a5dd56651d8fa068 - - default default] Lock "ea741b45-c6b4-41c0-a70f-c752b616faa2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.447s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:24:13 compute-0 nova_compute[189608]: 2025-11-24 22:24:13.327 189613 DEBUG nova.compute.manager [req-39bb6db3-33c6-4d71-9466-e31c3b703f59 req-a9ccc066-4f87-4196-bdd3-97af0290ab98 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Received event network-vif-plugged-5430cfcb-550b-4518-9caa-0720f99730b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:24:13 compute-0 nova_compute[189608]: 2025-11-24 22:24:13.327 189613 DEBUG oslo_concurrency.lockutils [req-39bb6db3-33c6-4d71-9466-e31c3b703f59 req-a9ccc066-4f87-4196-bdd3-97af0290ab98 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "ea741b45-c6b4-41c0-a70f-c752b616faa2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:24:13 compute-0 nova_compute[189608]: 2025-11-24 22:24:13.328 189613 DEBUG oslo_concurrency.lockutils [req-39bb6db3-33c6-4d71-9466-e31c3b703f59 req-a9ccc066-4f87-4196-bdd3-97af0290ab98 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "ea741b45-c6b4-41c0-a70f-c752b616faa2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:24:13 compute-0 nova_compute[189608]: 2025-11-24 22:24:13.328 189613 DEBUG oslo_concurrency.lockutils [req-39bb6db3-33c6-4d71-9466-e31c3b703f59 req-a9ccc066-4f87-4196-bdd3-97af0290ab98 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "ea741b45-c6b4-41c0-a70f-c752b616faa2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:24:13 compute-0 nova_compute[189608]: 2025-11-24 22:24:13.328 189613 DEBUG nova.compute.manager [req-39bb6db3-33c6-4d71-9466-e31c3b703f59 req-a9ccc066-4f87-4196-bdd3-97af0290ab98 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] No waiting events found dispatching network-vif-plugged-5430cfcb-550b-4518-9caa-0720f99730b9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 22:24:13 compute-0 nova_compute[189608]: 2025-11-24 22:24:13.329 189613 WARNING nova.compute.manager [req-39bb6db3-33c6-4d71-9466-e31c3b703f59 req-a9ccc066-4f87-4196-bdd3-97af0290ab98 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Received unexpected event network-vif-plugged-5430cfcb-550b-4518-9caa-0720f99730b9 for instance with vm_state deleted and task_state None.
Nov 24 22:24:13 compute-0 nova_compute[189608]: 2025-11-24 22:24:13.764 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:24:14 compute-0 podman[249470]: 2025-11-24 22:24:14.559980407 +0000 UTC m=+0.109564659 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 24 22:24:16 compute-0 nova_compute[189608]: 2025-11-24 22:24:16.209 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:24:16 compute-0 podman[249494]: 2025-11-24 22:24:16.606617614 +0000 UTC m=+0.149574372 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent)
Nov 24 22:24:16 compute-0 podman[249493]: 2025-11-24 22:24:16.622271602 +0000 UTC m=+0.172208317 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ovn_controller)
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.627 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.628 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.628 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.629 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f4c57fbbfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.630 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.630 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.631 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.631 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.631 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.631 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.631 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.631 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.632 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.632 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.632 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.632 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.632 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.632 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.633 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f4c580340b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.634 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.634 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f4c57fba4e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.633 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.635 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.635 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.636 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.636 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.636 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.636 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.636 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.637 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.634 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.637 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f4c57fbb080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.638 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.638 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f4c57fbb1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.638 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.638 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f4c57fb9730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.639 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.639 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f4c57fbb200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.639 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.639 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f4c57fbb260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.640 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.640 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f4c57fbb2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.640 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.640 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f4c57fbb320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.637 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.641 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.641 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f4c57fbb380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.641 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.642 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56b43e90>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.642 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.643 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f4c58034380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.643 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.643 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f4c57fbb740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.643 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.643 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f4c57fbb3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.643 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.644 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f4c57fbb9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.644 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.644 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f4c57fbb440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.644 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.644 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f4c57fbbc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.645 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.645 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f4c57fbbcb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.645 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.645 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f4c57fbbd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.645 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.646 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f4c57fbbda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.646 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.646 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f4c57fbbe30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.646 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.646 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f4c57fbb680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.646 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.647 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f4c57fbbec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.647 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.647 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f4c57fbb6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.647 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.647 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f4c59b99130>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.647 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.648 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f4c57fbbf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.648 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.648 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.649 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.649 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.649 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.649 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.649 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.649 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.649 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.649 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.650 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.650 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.650 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.650 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.650 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.650 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.650 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.651 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.651 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.651 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.651 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.651 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.651 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.651 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.652 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.652 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:24:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:24:17.652 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:24:18 compute-0 nova_compute[189608]: 2025-11-24 22:24:18.769 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:24:21 compute-0 nova_compute[189608]: 2025-11-24 22:24:21.213 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:24:23 compute-0 nova_compute[189608]: 2025-11-24 22:24:23.771 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:24:24 compute-0 nova_compute[189608]: 2025-11-24 22:24:24.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:24:24 compute-0 nova_compute[189608]: 2025-11-24 22:24:24.795 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 22:24:24 compute-0 nova_compute[189608]: 2025-11-24 22:24:24.825 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 22:24:25 compute-0 podman[249538]: 2025-11-24 22:24:25.564887701 +0000 UTC m=+0.112196981 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 24 22:24:26 compute-0 nova_compute[189608]: 2025-11-24 22:24:26.185 189613 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764023051.1835067, ea741b45-c6b4-41c0-a70f-c752b616faa2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:24:26 compute-0 nova_compute[189608]: 2025-11-24 22:24:26.186 189613 INFO nova.compute.manager [-] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] VM Stopped (Lifecycle Event)
Nov 24 22:24:26 compute-0 nova_compute[189608]: 2025-11-24 22:24:26.207 189613 DEBUG nova.compute.manager [None req-cd6398af-581a-4594-b1f3-effa46dae0a0 - - - - - -] [instance: ea741b45-c6b4-41c0-a70f-c752b616faa2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:24:26 compute-0 nova_compute[189608]: 2025-11-24 22:24:26.217 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:24:28 compute-0 nova_compute[189608]: 2025-11-24 22:24:28.774 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:24:28 compute-0 nova_compute[189608]: 2025-11-24 22:24:28.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:24:28 compute-0 nova_compute[189608]: 2025-11-24 22:24:28.837 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:24:28 compute-0 nova_compute[189608]: 2025-11-24 22:24:28.838 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:24:28 compute-0 nova_compute[189608]: 2025-11-24 22:24:28.838 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:24:28 compute-0 nova_compute[189608]: 2025-11-24 22:24:28.839 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 22:24:29 compute-0 nova_compute[189608]: 2025-11-24 22:24:29.439 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:24:29 compute-0 nova_compute[189608]: 2025-11-24 22:24:29.441 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5379MB free_disk=72.20023345947266GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 22:24:29 compute-0 nova_compute[189608]: 2025-11-24 22:24:29.441 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:24:29 compute-0 nova_compute[189608]: 2025-11-24 22:24:29.442 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:24:29 compute-0 nova_compute[189608]: 2025-11-24 22:24:29.530 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 22:24:29 compute-0 nova_compute[189608]: 2025-11-24 22:24:29.531 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 22:24:29 compute-0 nova_compute[189608]: 2025-11-24 22:24:29.569 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:24:29 compute-0 nova_compute[189608]: 2025-11-24 22:24:29.589 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:24:29 compute-0 nova_compute[189608]: 2025-11-24 22:24:29.621 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 22:24:29 compute-0 nova_compute[189608]: 2025-11-24 22:24:29.622 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:24:29 compute-0 podman[203795]: time="2025-11-24T22:24:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:24:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:24:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Nov 24 22:24:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:24:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4325 "" "Go-http-client/1.1"
Nov 24 22:24:30 compute-0 nova_compute[189608]: 2025-11-24 22:24:30.618 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:24:30 compute-0 nova_compute[189608]: 2025-11-24 22:24:30.619 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:24:31 compute-0 nova_compute[189608]: 2025-11-24 22:24:31.223 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:24:31 compute-0 openstack_network_exporter[205945]: ERROR   22:24:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:24:31 compute-0 openstack_network_exporter[205945]: ERROR   22:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:24:31 compute-0 openstack_network_exporter[205945]: ERROR   22:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:24:31 compute-0 openstack_network_exporter[205945]: ERROR   22:24:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:24:31 compute-0 openstack_network_exporter[205945]: ERROR   22:24:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:24:32 compute-0 podman[249563]: 2025-11-24 22:24:32.587508094 +0000 UTC m=+0.128426224 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:24:32 compute-0 nova_compute[189608]: 2025-11-24 22:24:32.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:24:33 compute-0 sshd-session[249583]: Invalid user sol from 193.32.162.145 port 51636
Nov 24 22:24:33 compute-0 sshd-session[249583]: Connection closed by invalid user sol 193.32.162.145 port 51636 [preauth]
Nov 24 22:24:33 compute-0 nova_compute[189608]: 2025-11-24 22:24:33.776 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:24:33 compute-0 nova_compute[189608]: 2025-11-24 22:24:33.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:24:34 compute-0 nova_compute[189608]: 2025-11-24 22:24:34.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:24:35 compute-0 podman[249585]: 2025-11-24 22:24:35.588696186 +0000 UTC m=+0.134917767 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.4, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 24 22:24:36 compute-0 nova_compute[189608]: 2025-11-24 22:24:36.228 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:24:36 compute-0 nova_compute[189608]: 2025-11-24 22:24:36.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:24:37 compute-0 nova_compute[189608]: 2025-11-24 22:24:37.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:24:37 compute-0 nova_compute[189608]: 2025-11-24 22:24:37.794 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 22:24:38 compute-0 nova_compute[189608]: 2025-11-24 22:24:38.780 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:24:41 compute-0 nova_compute[189608]: 2025-11-24 22:24:41.232 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:24:41 compute-0 podman[249606]: 2025-11-24 22:24:41.586995972 +0000 UTC m=+0.125332869 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, distribution-scope=public, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, config_id=edpm, io.openshift.tags=minimal rhel9, name=ubi9-minimal, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, version=9.6, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container)
Nov 24 22:24:41 compute-0 podman[249607]: 2025-11-24 22:24:41.594527817 +0000 UTC m=+0.123677468 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi)
Nov 24 22:24:41 compute-0 podman[249605]: 2025-11-24 22:24:41.612107763 +0000 UTC m=+0.155827087 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, release=1214.1726694543, distribution-scope=public, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, name=ubi9, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=9.4, vcs-type=git, io.buildah.version=1.29.0, config_id=edpm, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, vendor=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 24 22:24:42 compute-0 ovn_controller[97889]: 2025-11-24T22:24:42Z|00064|memory_trim|INFO|Detected inactivity (last active 30012 ms ago): trimming memory
Nov 24 22:24:42 compute-0 sshd-session[249663]: Invalid user sol from 45.148.10.240 port 60742
Nov 24 22:24:42 compute-0 sshd-session[249663]: Connection closed by invalid user sol 45.148.10.240 port 60742 [preauth]
Nov 24 22:24:43 compute-0 nova_compute[189608]: 2025-11-24 22:24:43.782 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:24:44 compute-0 podman[249665]: 2025-11-24 22:24:44.822325146 +0000 UTC m=+0.115026898 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 22:24:46 compute-0 nova_compute[189608]: 2025-11-24 22:24:46.236 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:24:47 compute-0 podman[249689]: 2025-11-24 22:24:47.568609731 +0000 UTC m=+0.103617905 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Nov 24 22:24:47 compute-0 podman[249688]: 2025-11-24 22:24:47.641677533 +0000 UTC m=+0.189481955 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 24 22:24:48 compute-0 nova_compute[189608]: 2025-11-24 22:24:48.784 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:24:51 compute-0 nova_compute[189608]: 2025-11-24 22:24:51.242 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:24:53 compute-0 nova_compute[189608]: 2025-11-24 22:24:53.788 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:24:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:24:54.587 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:24:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:24:54.587 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:24:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:24:54.587 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:24:56 compute-0 nova_compute[189608]: 2025-11-24 22:24:56.248 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:24:56 compute-0 podman[249733]: 2025-11-24 22:24:56.59783681 +0000 UTC m=+0.146934679 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 24 22:24:58 compute-0 nova_compute[189608]: 2025-11-24 22:24:58.790 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:24:59 compute-0 podman[203795]: time="2025-11-24T22:24:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:24:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:24:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Nov 24 22:24:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:24:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4332 "" "Go-http-client/1.1"
Nov 24 22:25:01 compute-0 nova_compute[189608]: 2025-11-24 22:25:01.253 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:25:01 compute-0 openstack_network_exporter[205945]: ERROR   22:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:25:01 compute-0 openstack_network_exporter[205945]: ERROR   22:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:25:01 compute-0 openstack_network_exporter[205945]: ERROR   22:25:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:25:01 compute-0 openstack_network_exporter[205945]: ERROR   22:25:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:25:01 compute-0 openstack_network_exporter[205945]: ERROR   22:25:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:25:03 compute-0 podman[249758]: 2025-11-24 22:25:03.578585881 +0000 UTC m=+0.121734614 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 24 22:25:03 compute-0 nova_compute[189608]: 2025-11-24 22:25:03.794 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:25:06 compute-0 nova_compute[189608]: 2025-11-24 22:25:06.257 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:25:06 compute-0 podman[249778]: 2025-11-24 22:25:06.544796845 +0000 UTC m=+0.104170016 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 24 22:25:08 compute-0 nova_compute[189608]: 2025-11-24 22:25:08.797 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:25:11 compute-0 nova_compute[189608]: 2025-11-24 22:25:11.262 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:25:11 compute-0 nova_compute[189608]: 2025-11-24 22:25:11.762 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:25:12 compute-0 podman[249798]: 2025-11-24 22:25:12.569980676 +0000 UTC m=+0.109203793 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2024-09-18T21:23:30, name=ubi9, vcs-type=git, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0)
Nov 24 22:25:12 compute-0 podman[249799]: 2025-11-24 22:25:12.577119579 +0000 UTC m=+0.108882853 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, architecture=x86_64, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, name=ubi9-minimal, vendor=Red Hat, Inc., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_id=edpm, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 24 22:25:12 compute-0 podman[249800]: 2025-11-24 22:25:12.603864782 +0000 UTC m=+0.129410363 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 22:25:13 compute-0 nova_compute[189608]: 2025-11-24 22:25:13.800 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:25:15 compute-0 podman[249852]: 2025-11-24 22:25:15.53422563 +0000 UTC m=+0.092713498 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 24 22:25:16 compute-0 nova_compute[189608]: 2025-11-24 22:25:16.266 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:25:18 compute-0 podman[249878]: 2025-11-24 22:25:18.537656796 +0000 UTC m=+0.084980349 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 22:25:18 compute-0 podman[249877]: 2025-11-24 22:25:18.609543715 +0000 UTC m=+0.155912548 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 24 22:25:18 compute-0 nova_compute[189608]: 2025-11-24 22:25:18.804 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:25:21 compute-0 nova_compute[189608]: 2025-11-24 22:25:21.270 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:25:21 compute-0 nova_compute[189608]: 2025-11-24 22:25:21.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:25:23 compute-0 nova_compute[189608]: 2025-11-24 22:25:23.809 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:25:25 compute-0 nova_compute[189608]: 2025-11-24 22:25:25.798 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:25:25 compute-0 nova_compute[189608]: 2025-11-24 22:25:25.799 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 22:25:25 compute-0 nova_compute[189608]: 2025-11-24 22:25:25.800 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 22:25:25 compute-0 nova_compute[189608]: 2025-11-24 22:25:25.821 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 22:25:26 compute-0 nova_compute[189608]: 2025-11-24 22:25:26.276 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:25:27 compute-0 podman[249922]: 2025-11-24 22:25:27.541583023 +0000 UTC m=+0.093111291 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 24 22:25:27 compute-0 nova_compute[189608]: 2025-11-24 22:25:27.723 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:25:28 compute-0 nova_compute[189608]: 2025-11-24 22:25:28.813 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:25:29 compute-0 podman[203795]: time="2025-11-24T22:25:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:25:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:25:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Nov 24 22:25:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:25:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4334 "" "Go-http-client/1.1"
Nov 24 22:25:29 compute-0 nova_compute[189608]: 2025-11-24 22:25:29.808 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:25:30 compute-0 nova_compute[189608]: 2025-11-24 22:25:30.796 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:25:30 compute-0 nova_compute[189608]: 2025-11-24 22:25:30.821 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:25:30 compute-0 nova_compute[189608]: 2025-11-24 22:25:30.821 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:25:30 compute-0 nova_compute[189608]: 2025-11-24 22:25:30.822 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:25:30 compute-0 nova_compute[189608]: 2025-11-24 22:25:30.822 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 22:25:31 compute-0 nova_compute[189608]: 2025-11-24 22:25:31.281 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:25:31 compute-0 nova_compute[189608]: 2025-11-24 22:25:31.336 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:25:31 compute-0 nova_compute[189608]: 2025-11-24 22:25:31.338 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5377MB free_disk=72.20022964477539GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 22:25:31 compute-0 nova_compute[189608]: 2025-11-24 22:25:31.339 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:25:31 compute-0 nova_compute[189608]: 2025-11-24 22:25:31.339 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:25:31 compute-0 openstack_network_exporter[205945]: ERROR   22:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:25:31 compute-0 openstack_network_exporter[205945]: ERROR   22:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:25:31 compute-0 openstack_network_exporter[205945]: ERROR   22:25:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:25:31 compute-0 openstack_network_exporter[205945]: ERROR   22:25:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:25:31 compute-0 openstack_network_exporter[205945]: ERROR   22:25:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:25:31 compute-0 nova_compute[189608]: 2025-11-24 22:25:31.680 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 22:25:31 compute-0 nova_compute[189608]: 2025-11-24 22:25:31.681 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 22:25:31 compute-0 nova_compute[189608]: 2025-11-24 22:25:31.816 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Refreshing inventories for resource provider 7680d048-14f1-46f8-a34d-a7eb32eb11df _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 24 22:25:31 compute-0 nova_compute[189608]: 2025-11-24 22:25:31.925 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Updating ProviderTree inventory for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 24 22:25:31 compute-0 nova_compute[189608]: 2025-11-24 22:25:31.925 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Updating inventory in ProviderTree for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 24 22:25:31 compute-0 nova_compute[189608]: 2025-11-24 22:25:31.946 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Refreshing aggregate associations for resource provider 7680d048-14f1-46f8-a34d-a7eb32eb11df, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 24 22:25:31 compute-0 nova_compute[189608]: 2025-11-24 22:25:31.980 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Refreshing trait associations for resource provider 7680d048-14f1-46f8-a34d-a7eb32eb11df, traits: HW_CPU_X86_AVX2,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NODE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE4A,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_ACCELERATORS,HW_CPU_X86_SHA,HW_CPU_X86_AVX,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_1_2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 24 22:25:32 compute-0 nova_compute[189608]: 2025-11-24 22:25:32.017 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:25:32 compute-0 nova_compute[189608]: 2025-11-24 22:25:32.033 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:25:32 compute-0 nova_compute[189608]: 2025-11-24 22:25:32.036 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 22:25:32 compute-0 nova_compute[189608]: 2025-11-24 22:25:32.037 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:25:32 compute-0 nova_compute[189608]: 2025-11-24 22:25:32.795 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:25:32 compute-0 nova_compute[189608]: 2025-11-24 22:25:32.796 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:25:32 compute-0 nova_compute[189608]: 2025-11-24 22:25:32.797 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 24 22:25:33 compute-0 nova_compute[189608]: 2025-11-24 22:25:33.810 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:25:33 compute-0 nova_compute[189608]: 2025-11-24 22:25:33.815 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:25:34 compute-0 podman[249944]: 2025-11-24 22:25:34.585603585 +0000 UTC m=+0.139817317 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 24 22:25:35 compute-0 nova_compute[189608]: 2025-11-24 22:25:35.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:25:35 compute-0 nova_compute[189608]: 2025-11-24 22:25:35.795 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:25:36 compute-0 nova_compute[189608]: 2025-11-24 22:25:36.285 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:25:36 compute-0 nova_compute[189608]: 2025-11-24 22:25:36.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:25:37 compute-0 podman[249963]: 2025-11-24 22:25:37.588481793 +0000 UTC m=+0.136553986 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true)
Nov 24 22:25:38 compute-0 nova_compute[189608]: 2025-11-24 22:25:38.792 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:25:38 compute-0 nova_compute[189608]: 2025-11-24 22:25:38.793 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 24 22:25:38 compute-0 nova_compute[189608]: 2025-11-24 22:25:38.805 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 24 22:25:38 compute-0 nova_compute[189608]: 2025-11-24 22:25:38.819 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:25:39 compute-0 nova_compute[189608]: 2025-11-24 22:25:39.806 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:25:39 compute-0 nova_compute[189608]: 2025-11-24 22:25:39.806 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 22:25:41 compute-0 nova_compute[189608]: 2025-11-24 22:25:41.293 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:25:41 compute-0 nova_compute[189608]: 2025-11-24 22:25:41.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:25:43 compute-0 podman[249982]: 2025-11-24 22:25:43.552938362 +0000 UTC m=+0.110530514 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, distribution-scope=public, com.redhat.component=ubi9-container, config_id=edpm, io.openshift.expose-services=, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, vcs-type=git, container_name=kepler)
Nov 24 22:25:43 compute-0 podman[249983]: 2025-11-24 22:25:43.56185208 +0000 UTC m=+0.113976722 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, container_name=openstack_network_exporter, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_id=edpm, name=ubi9-minimal, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, release=1755695350, vcs-type=git, io.buildah.version=1.33.7, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible)
Nov 24 22:25:43 compute-0 podman[249984]: 2025-11-24 22:25:43.578200569 +0000 UTC m=+0.118349458 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Nov 24 22:25:43 compute-0 nova_compute[189608]: 2025-11-24 22:25:43.822 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:25:46 compute-0 nova_compute[189608]: 2025-11-24 22:25:46.298 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:25:46 compute-0 podman[250042]: 2025-11-24 22:25:46.503714257 +0000 UTC m=+0.065295945 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 24 22:25:47 compute-0 sshd-session[250041]: Connection closed by authenticating user root 80.94.95.115 port 33716 [preauth]
Nov 24 22:25:48 compute-0 nova_compute[189608]: 2025-11-24 22:25:48.825 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:25:49 compute-0 podman[250070]: 2025-11-24 22:25:49.585958867 +0000 UTC m=+0.127148582 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 22:25:49 compute-0 podman[250069]: 2025-11-24 22:25:49.646929956 +0000 UTC m=+0.192622322 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 24 22:25:51 compute-0 nova_compute[189608]: 2025-11-24 22:25:51.302 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:25:53 compute-0 nova_compute[189608]: 2025-11-24 22:25:53.829 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:25:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:25:54.589 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:25:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:25:54.590 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:25:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:25:54.590 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:25:56 compute-0 nova_compute[189608]: 2025-11-24 22:25:56.307 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:25:58 compute-0 podman[250113]: 2025-11-24 22:25:58.550036582 +0000 UTC m=+0.108293474 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 24 22:25:58 compute-0 nova_compute[189608]: 2025-11-24 22:25:58.831 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:25:59 compute-0 podman[203795]: time="2025-11-24T22:25:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:25:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:25:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Nov 24 22:25:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:25:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4339 "" "Go-http-client/1.1"
Nov 24 22:26:01 compute-0 nova_compute[189608]: 2025-11-24 22:26:01.311 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:26:01 compute-0 openstack_network_exporter[205945]: ERROR   22:26:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:26:01 compute-0 openstack_network_exporter[205945]: ERROR   22:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:26:01 compute-0 openstack_network_exporter[205945]: ERROR   22:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:26:01 compute-0 openstack_network_exporter[205945]: ERROR   22:26:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:26:01 compute-0 openstack_network_exporter[205945]: ERROR   22:26:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:26:03 compute-0 nova_compute[189608]: 2025-11-24 22:26:03.837 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:26:05 compute-0 podman[250137]: 2025-11-24 22:26:05.561536051 +0000 UTC m=+0.116068248 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd)
Nov 24 22:26:06 compute-0 nova_compute[189608]: 2025-11-24 22:26:06.315 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:26:08 compute-0 podman[250155]: 2025-11-24 22:26:08.583195271 +0000 UTC m=+0.131477327 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 22:26:08 compute-0 nova_compute[189608]: 2025-11-24 22:26:08.845 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:26:11 compute-0 nova_compute[189608]: 2025-11-24 22:26:11.319 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:26:13 compute-0 nova_compute[189608]: 2025-11-24 22:26:13.849 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:26:14 compute-0 podman[250175]: 2025-11-24 22:26:14.585968575 +0000 UTC m=+0.130498147 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, io.openshift.expose-services=, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., config_id=edpm, name=ubi9, build-date=2024-09-18T21:23:30, container_name=kepler)
Nov 24 22:26:14 compute-0 podman[250177]: 2025-11-24 22:26:14.588092371 +0000 UTC m=+0.116282624 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:26:14 compute-0 podman[250176]: 2025-11-24 22:26:14.588678619 +0000 UTC m=+0.127866715 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, release=1755695350, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vcs-type=git, container_name=openstack_network_exporter, architecture=x86_64, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41)
Nov 24 22:26:16 compute-0 nova_compute[189608]: 2025-11-24 22:26:16.324 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:26:17 compute-0 podman[250234]: 2025-11-24 22:26:17.55474638 +0000 UTC m=+0.104818947 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.632 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.632 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.632 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.633 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f4c57fbbfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.634 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.635 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.635 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.635 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.636 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.636 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.637 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f4c580340b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.636 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.637 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.638 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f4c57fba4e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.637 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.638 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.639 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f4c57fbb080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.639 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.638 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.639 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f4c57fbb1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.640 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.640 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f4c57fb9730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.640 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.640 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.641 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f4c57fbb200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.641 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.641 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.642 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f4c57fbb260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.642 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.642 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f4c57fbb2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.642 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.642 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f4c57fbb320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.642 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.643 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f4c57fbb380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.643 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.642 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.643 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.644 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.644 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.644 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.645 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.645 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.645 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.646 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.646 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.646 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.647 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.647 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.647 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.648 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f4c58034380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.648 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.648 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f4c57fbb740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.648 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.648 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'power.state': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.649 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f4c57fbb3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.649 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.649 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f4c57fbb9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.649 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.650 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f4c57fbb440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.650 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.650 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f4c57fbbc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.650 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.650 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f4c57fbbcb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.650 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.650 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f4c57fbbd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.650 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.650 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f4c57fbbda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.650 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.650 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f4c57fbbe30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.650 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.650 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f4c57fbb680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.650 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.651 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f4c57fbbec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.651 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.651 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f4c57fbb6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.651 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.651 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f4c59b99130>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.651 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.651 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f4c57fbbf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.651 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.651 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.652 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.652 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.652 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.652 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.653 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.653 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.653 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.653 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.653 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.654 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.654 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.654 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.654 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.654 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.654 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.655 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.655 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.655 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.655 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.656 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.656 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.656 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.656 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.656 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:26:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:26:17.656 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:26:18 compute-0 nova_compute[189608]: 2025-11-24 22:26:18.852 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:26:20 compute-0 podman[250261]: 2025-11-24 22:26:20.577970522 +0000 UTC m=+0.115435388 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:26:20 compute-0 podman[250260]: 2025-11-24 22:26:20.659102089 +0000 UTC m=+0.200157697 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 22:26:21 compute-0 nova_compute[189608]: 2025-11-24 22:26:21.328 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:26:23 compute-0 nova_compute[189608]: 2025-11-24 22:26:23.855 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:26:25 compute-0 nova_compute[189608]: 2025-11-24 22:26:25.811 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:26:25 compute-0 nova_compute[189608]: 2025-11-24 22:26:25.812 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 22:26:25 compute-0 nova_compute[189608]: 2025-11-24 22:26:25.812 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 22:26:25 compute-0 nova_compute[189608]: 2025-11-24 22:26:25.830 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 22:26:26 compute-0 nova_compute[189608]: 2025-11-24 22:26:26.333 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:26:28 compute-0 nova_compute[189608]: 2025-11-24 22:26:28.857 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:26:29 compute-0 podman[250306]: 2025-11-24 22:26:29.549182708 +0000 UTC m=+0.099755338 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 24 22:26:29 compute-0 podman[203795]: time="2025-11-24T22:26:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:26:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:26:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Nov 24 22:26:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:26:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4339 "" "Go-http-client/1.1"
Nov 24 22:26:30 compute-0 nova_compute[189608]: 2025-11-24 22:26:30.792 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:26:30 compute-0 nova_compute[189608]: 2025-11-24 22:26:30.822 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:26:30 compute-0 nova_compute[189608]: 2025-11-24 22:26:30.823 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:26:30 compute-0 nova_compute[189608]: 2025-11-24 22:26:30.823 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:26:30 compute-0 nova_compute[189608]: 2025-11-24 22:26:30.824 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 22:26:31 compute-0 nova_compute[189608]: 2025-11-24 22:26:31.254 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:26:31 compute-0 nova_compute[189608]: 2025-11-24 22:26:31.255 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5373MB free_disk=72.20025253295898GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 22:26:31 compute-0 nova_compute[189608]: 2025-11-24 22:26:31.256 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:26:31 compute-0 nova_compute[189608]: 2025-11-24 22:26:31.256 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:26:31 compute-0 nova_compute[189608]: 2025-11-24 22:26:31.334 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 22:26:31 compute-0 nova_compute[189608]: 2025-11-24 22:26:31.336 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 22:26:31 compute-0 nova_compute[189608]: 2025-11-24 22:26:31.342 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:26:31 compute-0 nova_compute[189608]: 2025-11-24 22:26:31.379 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:26:31 compute-0 nova_compute[189608]: 2025-11-24 22:26:31.396 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:26:31 compute-0 nova_compute[189608]: 2025-11-24 22:26:31.399 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 22:26:31 compute-0 nova_compute[189608]: 2025-11-24 22:26:31.400 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.144s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:26:31 compute-0 openstack_network_exporter[205945]: ERROR   22:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:26:31 compute-0 openstack_network_exporter[205945]: ERROR   22:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:26:31 compute-0 openstack_network_exporter[205945]: ERROR   22:26:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:26:31 compute-0 openstack_network_exporter[205945]: ERROR   22:26:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:26:31 compute-0 openstack_network_exporter[205945]: ERROR   22:26:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:26:32 compute-0 nova_compute[189608]: 2025-11-24 22:26:32.397 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:26:33 compute-0 nova_compute[189608]: 2025-11-24 22:26:33.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:26:33 compute-0 nova_compute[189608]: 2025-11-24 22:26:33.860 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:26:35 compute-0 nova_compute[189608]: 2025-11-24 22:26:35.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:26:35 compute-0 nova_compute[189608]: 2025-11-24 22:26:35.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:26:36 compute-0 nova_compute[189608]: 2025-11-24 22:26:36.347 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:26:36 compute-0 podman[250331]: 2025-11-24 22:26:36.587192465 +0000 UTC m=+0.130374303 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 22:26:36 compute-0 nova_compute[189608]: 2025-11-24 22:26:36.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:26:38 compute-0 nova_compute[189608]: 2025-11-24 22:26:38.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:26:38 compute-0 nova_compute[189608]: 2025-11-24 22:26:38.863 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:26:39 compute-0 podman[250351]: 2025-11-24 22:26:39.579482274 +0000 UTC m=+0.134471961 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Nov 24 22:26:39 compute-0 nova_compute[189608]: 2025-11-24 22:26:39.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:26:39 compute-0 nova_compute[189608]: 2025-11-24 22:26:39.793 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 22:26:41 compute-0 nova_compute[189608]: 2025-11-24 22:26:41.351 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:26:43 compute-0 nova_compute[189608]: 2025-11-24 22:26:43.867 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:26:44 compute-0 podman[250375]: 2025-11-24 22:26:44.816205589 +0000 UTC m=+0.099955455 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:26:44 compute-0 podman[250374]: 2025-11-24 22:26:44.821549605 +0000 UTC m=+0.100438990 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, managed_by=edpm_ansible, config_id=edpm, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, architecture=x86_64, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 24 22:26:44 compute-0 podman[250373]: 2025-11-24 22:26:44.823081083 +0000 UTC m=+0.109361478 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, release=1214.1726694543, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, distribution-scope=public, vcs-type=git, version=9.4, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., config_id=edpm)
Nov 24 22:26:44 compute-0 sshd-session[250371]: Invalid user sol from 45.148.10.240 port 57388
Nov 24 22:26:45 compute-0 sshd-session[250371]: Connection closed by invalid user sol 45.148.10.240 port 57388 [preauth]
Nov 24 22:26:46 compute-0 nova_compute[189608]: 2025-11-24 22:26:46.355 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:26:48 compute-0 podman[250428]: 2025-11-24 22:26:48.553762556 +0000 UTC m=+0.094461064 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 22:26:48 compute-0 nova_compute[189608]: 2025-11-24 22:26:48.869 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:26:51 compute-0 nova_compute[189608]: 2025-11-24 22:26:51.360 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:26:51 compute-0 podman[250453]: 2025-11-24 22:26:51.604166644 +0000 UTC m=+0.146707171 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 22:26:51 compute-0 podman[250452]: 2025-11-24 22:26:51.631185626 +0000 UTC m=+0.184008694 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 24 22:26:53 compute-0 nova_compute[189608]: 2025-11-24 22:26:53.871 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:26:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:26:54.590 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:26:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:26:54.591 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:26:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:26:54.591 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:26:56 compute-0 nova_compute[189608]: 2025-11-24 22:26:56.364 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:26:58 compute-0 nova_compute[189608]: 2025-11-24 22:26:58.873 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:26:59 compute-0 podman[203795]: time="2025-11-24T22:26:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:26:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:26:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Nov 24 22:26:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:26:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4335 "" "Go-http-client/1.1"
Nov 24 22:27:00 compute-0 podman[250495]: 2025-11-24 22:27:00.574155924 +0000 UTC m=+0.117508193 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 24 22:27:01 compute-0 nova_compute[189608]: 2025-11-24 22:27:01.371 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:27:01 compute-0 openstack_network_exporter[205945]: ERROR   22:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:27:01 compute-0 openstack_network_exporter[205945]: ERROR   22:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:27:01 compute-0 openstack_network_exporter[205945]: ERROR   22:27:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:27:01 compute-0 openstack_network_exporter[205945]: ERROR   22:27:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:27:01 compute-0 openstack_network_exporter[205945]: ERROR   22:27:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:27:03 compute-0 nova_compute[189608]: 2025-11-24 22:27:03.877 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:27:06 compute-0 nova_compute[189608]: 2025-11-24 22:27:06.382 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:27:07 compute-0 podman[250518]: 2025-11-24 22:27:07.596519883 +0000 UTC m=+0.138344972 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 22:27:08 compute-0 nova_compute[189608]: 2025-11-24 22:27:08.882 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:27:10 compute-0 podman[250537]: 2025-11-24 22:27:10.599579005 +0000 UTC m=+0.140836518 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 24 22:27:11 compute-0 nova_compute[189608]: 2025-11-24 22:27:11.386 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:27:13 compute-0 nova_compute[189608]: 2025-11-24 22:27:13.888 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:27:15 compute-0 podman[250555]: 2025-11-24 22:27:15.537911356 +0000 UTC m=+0.099060127 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, io.openshift.expose-services=, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, container_name=kepler, io.buildah.version=1.29.0, vendor=Red Hat, Inc., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9)
Nov 24 22:27:15 compute-0 podman[250556]: 2025-11-24 22:27:15.56529233 +0000 UTC m=+0.108711219 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, config_id=edpm, distribution-scope=public, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter)
Nov 24 22:27:15 compute-0 podman[250557]: 2025-11-24 22:27:15.59546752 +0000 UTC m=+0.133457519 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:27:16 compute-0 nova_compute[189608]: 2025-11-24 22:27:16.392 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:27:18 compute-0 nova_compute[189608]: 2025-11-24 22:27:18.893 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:27:19 compute-0 podman[250614]: 2025-11-24 22:27:19.553140185 +0000 UTC m=+0.106207950 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 24 22:27:21 compute-0 nova_compute[189608]: 2025-11-24 22:27:21.396 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:27:22 compute-0 podman[250638]: 2025-11-24 22:27:22.570586817 +0000 UTC m=+0.109771041 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 24 22:27:22 compute-0 podman[250637]: 2025-11-24 22:27:22.612507843 +0000 UTC m=+0.157334642 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller)
Nov 24 22:27:22 compute-0 nova_compute[189608]: 2025-11-24 22:27:22.791 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:27:23 compute-0 nova_compute[189608]: 2025-11-24 22:27:23.895 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:27:26 compute-0 nova_compute[189608]: 2025-11-24 22:27:26.402 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:27:26 compute-0 nova_compute[189608]: 2025-11-24 22:27:26.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:27:26 compute-0 nova_compute[189608]: 2025-11-24 22:27:26.794 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 22:27:26 compute-0 nova_compute[189608]: 2025-11-24 22:27:26.795 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 22:27:26 compute-0 nova_compute[189608]: 2025-11-24 22:27:26.813 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 22:27:28 compute-0 nova_compute[189608]: 2025-11-24 22:27:28.899 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:27:29 compute-0 podman[203795]: time="2025-11-24T22:27:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:27:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:27:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Nov 24 22:27:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:27:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4337 "" "Go-http-client/1.1"
Nov 24 22:27:31 compute-0 nova_compute[189608]: 2025-11-24 22:27:31.407 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:27:31 compute-0 openstack_network_exporter[205945]: ERROR   22:27:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:27:31 compute-0 openstack_network_exporter[205945]: ERROR   22:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:27:31 compute-0 openstack_network_exporter[205945]: ERROR   22:27:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:27:31 compute-0 openstack_network_exporter[205945]: ERROR   22:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:27:31 compute-0 openstack_network_exporter[205945]: ERROR   22:27:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:27:31 compute-0 podman[250679]: 2025-11-24 22:27:31.540312338 +0000 UTC m=+0.095045492 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 24 22:27:31 compute-0 nova_compute[189608]: 2025-11-24 22:27:31.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:27:31 compute-0 nova_compute[189608]: 2025-11-24 22:27:31.823 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:27:31 compute-0 nova_compute[189608]: 2025-11-24 22:27:31.823 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:27:31 compute-0 nova_compute[189608]: 2025-11-24 22:27:31.824 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:27:31 compute-0 nova_compute[189608]: 2025-11-24 22:27:31.824 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 22:27:32 compute-0 nova_compute[189608]: 2025-11-24 22:27:32.310 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:27:32 compute-0 nova_compute[189608]: 2025-11-24 22:27:32.312 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5381MB free_disk=72.20025253295898GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 22:27:32 compute-0 nova_compute[189608]: 2025-11-24 22:27:32.313 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:27:32 compute-0 nova_compute[189608]: 2025-11-24 22:27:32.314 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:27:32 compute-0 nova_compute[189608]: 2025-11-24 22:27:32.381 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 22:27:32 compute-0 nova_compute[189608]: 2025-11-24 22:27:32.382 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 22:27:32 compute-0 nova_compute[189608]: 2025-11-24 22:27:32.418 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:27:32 compute-0 nova_compute[189608]: 2025-11-24 22:27:32.434 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:27:32 compute-0 nova_compute[189608]: 2025-11-24 22:27:32.441 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 22:27:32 compute-0 nova_compute[189608]: 2025-11-24 22:27:32.442 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.129s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:27:33 compute-0 nova_compute[189608]: 2025-11-24 22:27:33.439 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:27:33 compute-0 nova_compute[189608]: 2025-11-24 22:27:33.792 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:27:33 compute-0 nova_compute[189608]: 2025-11-24 22:27:33.902 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:27:36 compute-0 nova_compute[189608]: 2025-11-24 22:27:36.412 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:27:36 compute-0 nova_compute[189608]: 2025-11-24 22:27:36.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:27:36 compute-0 nova_compute[189608]: 2025-11-24 22:27:36.795 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:27:38 compute-0 podman[250702]: 2025-11-24 22:27:38.571726311 +0000 UTC m=+0.120217397 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 22:27:38 compute-0 nova_compute[189608]: 2025-11-24 22:27:38.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:27:38 compute-0 nova_compute[189608]: 2025-11-24 22:27:38.905 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:27:39 compute-0 nova_compute[189608]: 2025-11-24 22:27:39.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:27:40 compute-0 nova_compute[189608]: 2025-11-24 22:27:40.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:27:40 compute-0 nova_compute[189608]: 2025-11-24 22:27:40.794 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 22:27:41 compute-0 nova_compute[189608]: 2025-11-24 22:27:41.417 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:27:41 compute-0 podman[250722]: 2025-11-24 22:27:41.601083663 +0000 UTC m=+0.150853181 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm)
Nov 24 22:27:43 compute-0 nova_compute[189608]: 2025-11-24 22:27:43.907 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:27:46 compute-0 nova_compute[189608]: 2025-11-24 22:27:46.422 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:27:46 compute-0 podman[250742]: 2025-11-24 22:27:46.59197872 +0000 UTC m=+0.128368090 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, release-0.7.12=, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, name=ubi9, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 24 22:27:46 compute-0 podman[250743]: 2025-11-24 22:27:46.59774706 +0000 UTC m=+0.125068987 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vcs-type=git, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, vendor=Red Hat, Inc., architecture=x86_64)
Nov 24 22:27:46 compute-0 podman[250744]: 2025-11-24 22:27:46.63494964 +0000 UTC m=+0.154504355 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 22:27:48 compute-0 sshd-session[250801]: Invalid user sol from 193.32.162.145 port 34104
Nov 24 22:27:48 compute-0 sshd-session[250801]: Connection closed by invalid user sol 193.32.162.145 port 34104 [preauth]
Nov 24 22:27:48 compute-0 nova_compute[189608]: 2025-11-24 22:27:48.910 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:27:50 compute-0 podman[250804]: 2025-11-24 22:27:50.567149201 +0000 UTC m=+0.117011656 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 24 22:27:51 compute-0 nova_compute[189608]: 2025-11-24 22:27:51.428 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:27:53 compute-0 podman[250830]: 2025-11-24 22:27:53.591095546 +0000 UTC m=+0.131920062 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:27:53 compute-0 podman[250829]: 2025-11-24 22:27:53.642669132 +0000 UTC m=+0.196916726 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 24 22:27:53 compute-0 nova_compute[189608]: 2025-11-24 22:27:53.914 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:27:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:27:54.592 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:27:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:27:54.593 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:27:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:27:54.593 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:27:56 compute-0 nova_compute[189608]: 2025-11-24 22:27:56.434 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:27:58 compute-0 nova_compute[189608]: 2025-11-24 22:27:58.918 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:27:59 compute-0 podman[203795]: time="2025-11-24T22:27:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:27:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:27:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Nov 24 22:27:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:27:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4330 "" "Go-http-client/1.1"
Nov 24 22:28:01 compute-0 openstack_network_exporter[205945]: ERROR   22:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:28:01 compute-0 openstack_network_exporter[205945]: ERROR   22:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:28:01 compute-0 openstack_network_exporter[205945]: ERROR   22:28:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:28:01 compute-0 openstack_network_exporter[205945]: ERROR   22:28:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:28:01 compute-0 openstack_network_exporter[205945]: ERROR   22:28:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:28:01 compute-0 nova_compute[189608]: 2025-11-24 22:28:01.438 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:28:02 compute-0 podman[250871]: 2025-11-24 22:28:02.556204314 +0000 UTC m=+0.112398293 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 24 22:28:03 compute-0 nova_compute[189608]: 2025-11-24 22:28:03.922 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:28:06 compute-0 nova_compute[189608]: 2025-11-24 22:28:06.441 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:28:08 compute-0 nova_compute[189608]: 2025-11-24 22:28:08.926 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:28:09 compute-0 podman[250895]: 2025-11-24 22:28:09.052232283 +0000 UTC m=+0.091713778 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3)
Nov 24 22:28:10 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:28:10.416 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7e:33:1a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '7e:8a:65:da:aa:a2'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:28:10 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:28:10.418 106776 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 22:28:10 compute-0 nova_compute[189608]: 2025-11-24 22:28:10.422 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:28:11 compute-0 nova_compute[189608]: 2025-11-24 22:28:11.447 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:28:12 compute-0 podman[250915]: 2025-11-24 22:28:12.558410933 +0000 UTC m=+0.109306717 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Nov 24 22:28:13 compute-0 nova_compute[189608]: 2025-11-24 22:28:13.930 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:28:16 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:28:16.420 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d2f80616-70e9-484c-836d-1edab81fe5d9, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:28:16 compute-0 nova_compute[189608]: 2025-11-24 22:28:16.452 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:28:17 compute-0 podman[250936]: 2025-11-24 22:28:17.566703662 +0000 UTC m=+0.112401923 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, name=ubi9-minimal, architecture=x86_64, io.buildah.version=1.33.7, release=1755695350, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Nov 24 22:28:17 compute-0 podman[250935]: 2025-11-24 22:28:17.569549981 +0000 UTC m=+0.113957801 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1214.1726694543, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, vendor=Red Hat, Inc., config_id=edpm, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=Red Hat, Inc., name=ubi9, architecture=x86_64, io.openshift.expose-services=)
Nov 24 22:28:17 compute-0 podman[250937]: 2025-11-24 22:28:17.599449812 +0000 UTC m=+0.130789586 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.632 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.633 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.633 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.634 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f4c57fbbfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.635 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.636 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.636 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.636 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.636 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.636 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.636 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.637 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.637 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.637 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.637 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.637 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.638 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.638 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.638 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.638 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.639 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.639 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.639 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.639 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.640 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.640 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.640 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.641 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.641 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.639 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.642 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f4c580340b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.642 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.642 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f4c57fba4e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.642 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.642 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f4c57fbb080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.643 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.643 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f4c57fbb1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.643 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.643 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f4c57fb9730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.643 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.643 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f4c57fbb200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.644 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.644 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f4c57fbb260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.644 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.644 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f4c57fbb2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.644 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.644 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f4c57fbb320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.645 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.645 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f4c57fbb380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.645 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.645 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f4c58034380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.645 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.645 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f4c57fbb740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.646 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.646 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f4c57fbb3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.646 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.646 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f4c57fbb9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.646 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.647 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f4c57fbb440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.647 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.647 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f4c57fbbc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.647 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.647 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f4c57fbbcb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.648 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.648 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f4c57fbbd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.648 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.648 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f4c57fbbda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.648 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.648 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f4c57fbbe30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.649 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.649 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f4c57fbb680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.649 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.649 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f4c57fbbec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.649 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.650 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f4c57fbb6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.650 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.650 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f4c59b99130>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.650 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.650 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f4c57fbbf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.651 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.651 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.651 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.652 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.652 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.652 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.652 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.652 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.652 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.653 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.653 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.653 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.653 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.653 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.653 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.653 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.654 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.654 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.654 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.654 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.654 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.654 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.654 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.655 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.655 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.655 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:28:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:28:17.655 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:28:18 compute-0 nova_compute[189608]: 2025-11-24 22:28:18.935 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:28:21 compute-0 nova_compute[189608]: 2025-11-24 22:28:21.456 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:28:21 compute-0 podman[250995]: 2025-11-24 22:28:21.610179592 +0000 UTC m=+0.154192545 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 24 22:28:23 compute-0 nova_compute[189608]: 2025-11-24 22:28:23.937 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:28:24 compute-0 podman[251019]: 2025-11-24 22:28:24.574495738 +0000 UTC m=+0.111794194 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 24 22:28:24 compute-0 podman[251018]: 2025-11-24 22:28:24.635745196 +0000 UTC m=+0.176934983 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 24 22:28:26 compute-0 nova_compute[189608]: 2025-11-24 22:28:26.461 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:28:26 compute-0 nova_compute[189608]: 2025-11-24 22:28:26.795 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:28:26 compute-0 nova_compute[189608]: 2025-11-24 22:28:26.797 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 22:28:26 compute-0 nova_compute[189608]: 2025-11-24 22:28:26.797 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 22:28:26 compute-0 nova_compute[189608]: 2025-11-24 22:28:26.820 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 22:28:28 compute-0 nova_compute[189608]: 2025-11-24 22:28:28.942 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:28:29 compute-0 podman[203795]: time="2025-11-24T22:28:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:28:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:28:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Nov 24 22:28:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:28:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4335 "" "Go-http-client/1.1"
Nov 24 22:28:31 compute-0 openstack_network_exporter[205945]: ERROR   22:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:28:31 compute-0 openstack_network_exporter[205945]: ERROR   22:28:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:28:31 compute-0 openstack_network_exporter[205945]: ERROR   22:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:28:31 compute-0 openstack_network_exporter[205945]: ERROR   22:28:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:28:31 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:28:31 compute-0 openstack_network_exporter[205945]: ERROR   22:28:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:28:31 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:28:31 compute-0 nova_compute[189608]: 2025-11-24 22:28:31.465 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:28:31 compute-0 nova_compute[189608]: 2025-11-24 22:28:31.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:28:31 compute-0 nova_compute[189608]: 2025-11-24 22:28:31.834 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:28:31 compute-0 nova_compute[189608]: 2025-11-24 22:28:31.835 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:28:31 compute-0 nova_compute[189608]: 2025-11-24 22:28:31.836 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:28:31 compute-0 nova_compute[189608]: 2025-11-24 22:28:31.836 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 22:28:32 compute-0 nova_compute[189608]: 2025-11-24 22:28:32.382 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:28:32 compute-0 nova_compute[189608]: 2025-11-24 22:28:32.386 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5363MB free_disk=72.20025253295898GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 22:28:32 compute-0 nova_compute[189608]: 2025-11-24 22:28:32.387 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:28:32 compute-0 nova_compute[189608]: 2025-11-24 22:28:32.388 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:28:32 compute-0 nova_compute[189608]: 2025-11-24 22:28:32.462 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 22:28:32 compute-0 nova_compute[189608]: 2025-11-24 22:28:32.463 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 22:28:32 compute-0 nova_compute[189608]: 2025-11-24 22:28:32.495 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:28:32 compute-0 nova_compute[189608]: 2025-11-24 22:28:32.513 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:28:32 compute-0 nova_compute[189608]: 2025-11-24 22:28:32.517 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 22:28:32 compute-0 nova_compute[189608]: 2025-11-24 22:28:32.517 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.130s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:28:33 compute-0 podman[251059]: 2025-11-24 22:28:33.572137209 +0000 UTC m=+0.119154834 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 24 22:28:33 compute-0 nova_compute[189608]: 2025-11-24 22:28:33.948 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:28:34 compute-0 nova_compute[189608]: 2025-11-24 22:28:34.513 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:28:34 compute-0 nova_compute[189608]: 2025-11-24 22:28:34.513 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:28:36 compute-0 nova_compute[189608]: 2025-11-24 22:28:36.470 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:28:36 compute-0 nova_compute[189608]: 2025-11-24 22:28:36.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:28:37 compute-0 nova_compute[189608]: 2025-11-24 22:28:37.798 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:28:38 compute-0 nova_compute[189608]: 2025-11-24 22:28:38.796 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:28:38 compute-0 nova_compute[189608]: 2025-11-24 22:28:38.951 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:28:39 compute-0 podman[251084]: 2025-11-24 22:28:39.553719401 +0000 UTC m=+0.106932783 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd)
Nov 24 22:28:40 compute-0 nova_compute[189608]: 2025-11-24 22:28:40.796 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:28:40 compute-0 ovn_controller[97889]: 2025-11-24T22:28:40Z|00065|memory_trim|INFO|Detected inactivity (last active 30011 ms ago): trimming memory
Nov 24 22:28:41 compute-0 nova_compute[189608]: 2025-11-24 22:28:41.477 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:28:42 compute-0 nova_compute[189608]: 2025-11-24 22:28:42.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:28:42 compute-0 nova_compute[189608]: 2025-11-24 22:28:42.794 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 22:28:42 compute-0 sshd-session[251104]: Invalid user sol from 45.148.10.240 port 60566
Nov 24 22:28:43 compute-0 sshd-session[251104]: Connection closed by invalid user sol 45.148.10.240 port 60566 [preauth]
Nov 24 22:28:43 compute-0 podman[251106]: 2025-11-24 22:28:43.093076244 +0000 UTC m=+0.104024253 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_id=edpm)
Nov 24 22:28:43 compute-0 nova_compute[189608]: 2025-11-24 22:28:43.954 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:28:46 compute-0 nova_compute[189608]: 2025-11-24 22:28:46.484 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:28:48 compute-0 podman[251128]: 2025-11-24 22:28:48.589969785 +0000 UTC m=+0.127294256 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., release=1755695350, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9)
Nov 24 22:28:48 compute-0 podman[251129]: 2025-11-24 22:28:48.592719131 +0000 UTC m=+0.118568105 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_ipmi)
Nov 24 22:28:48 compute-0 podman[251127]: 2025-11-24 22:28:48.605166679 +0000 UTC m=+0.146406963 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, name=ubi9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, build-date=2024-09-18T21:23:30, config_id=edpm, container_name=kepler, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, version=9.4, architecture=x86_64, distribution-scope=public, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 24 22:28:48 compute-0 nova_compute[189608]: 2025-11-24 22:28:48.957 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:28:51 compute-0 nova_compute[189608]: 2025-11-24 22:28:51.489 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:28:52 compute-0 podman[251180]: 2025-11-24 22:28:52.56821725 +0000 UTC m=+0.113677102 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 24 22:28:53 compute-0 nova_compute[189608]: 2025-11-24 22:28:53.962 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:28:54 compute-0 nova_compute[189608]: 2025-11-24 22:28:54.133 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:28:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:28:54.594 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:28:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:28:54.595 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:28:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:28:54.596 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:28:55 compute-0 podman[251206]: 2025-11-24 22:28:55.615540994 +0000 UTC m=+0.148297332 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 24 22:28:55 compute-0 podman[251205]: 2025-11-24 22:28:55.70271145 +0000 UTC m=+0.241417893 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Nov 24 22:28:56 compute-0 nova_compute[189608]: 2025-11-24 22:28:56.493 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:28:57 compute-0 nova_compute[189608]: 2025-11-24 22:28:57.502 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:28:58 compute-0 nova_compute[189608]: 2025-11-24 22:28:58.553 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:28:58 compute-0 nova_compute[189608]: 2025-11-24 22:28:58.965 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:28:59 compute-0 podman[203795]: time="2025-11-24T22:28:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:28:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:28:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Nov 24 22:28:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:28:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4332 "" "Go-http-client/1.1"
Nov 24 22:29:00 compute-0 nova_compute[189608]: 2025-11-24 22:29:00.988 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:01 compute-0 openstack_network_exporter[205945]: ERROR   22:29:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:29:01 compute-0 openstack_network_exporter[205945]: ERROR   22:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:29:01 compute-0 openstack_network_exporter[205945]: ERROR   22:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:29:01 compute-0 openstack_network_exporter[205945]: ERROR   22:29:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:29:01 compute-0 openstack_network_exporter[205945]: ERROR   22:29:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:29:01 compute-0 nova_compute[189608]: 2025-11-24 22:29:01.496 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:02 compute-0 nova_compute[189608]: 2025-11-24 22:29:02.299 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:02 compute-0 nova_compute[189608]: 2025-11-24 22:29:02.741 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:03 compute-0 nova_compute[189608]: 2025-11-24 22:29:03.914 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:03 compute-0 nova_compute[189608]: 2025-11-24 22:29:03.967 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:04 compute-0 podman[251249]: 2025-11-24 22:29:04.553588316 +0000 UTC m=+0.098608524 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 24 22:29:06 compute-0 nova_compute[189608]: 2025-11-24 22:29:06.501 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:07 compute-0 nova_compute[189608]: 2025-11-24 22:29:07.240 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:08 compute-0 nova_compute[189608]: 2025-11-24 22:29:08.970 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:10 compute-0 podman[251271]: 2025-11-24 22:29:10.524074374 +0000 UTC m=+0.085055022 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 24 22:29:10 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:10.597 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7e:33:1a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '7e:8a:65:da:aa:a2'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:29:10 compute-0 nova_compute[189608]: 2025-11-24 22:29:10.598 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:10 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:10.598 106776 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 22:29:11 compute-0 nova_compute[189608]: 2025-11-24 22:29:11.505 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:11 compute-0 nova_compute[189608]: 2025-11-24 22:29:11.710 189613 DEBUG oslo_concurrency.lockutils [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Acquiring lock "8b851edf-b3aa-4ca0-a142-8dd0d0e6270a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:29:11 compute-0 nova_compute[189608]: 2025-11-24 22:29:11.711 189613 DEBUG oslo_concurrency.lockutils [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Lock "8b851edf-b3aa-4ca0-a142-8dd0d0e6270a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:29:11 compute-0 nova_compute[189608]: 2025-11-24 22:29:11.736 189613 DEBUG nova.compute.manager [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 24 22:29:11 compute-0 nova_compute[189608]: 2025-11-24 22:29:11.848 189613 DEBUG oslo_concurrency.lockutils [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:29:11 compute-0 nova_compute[189608]: 2025-11-24 22:29:11.849 189613 DEBUG oslo_concurrency.lockutils [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:29:11 compute-0 nova_compute[189608]: 2025-11-24 22:29:11.863 189613 DEBUG nova.virt.hardware [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 24 22:29:11 compute-0 nova_compute[189608]: 2025-11-24 22:29:11.864 189613 INFO nova.compute.claims [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Claim successful on node compute-0.ctlplane.example.com
Nov 24 22:29:11 compute-0 nova_compute[189608]: 2025-11-24 22:29:11.991 189613 DEBUG nova.compute.provider_tree [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:29:12 compute-0 nova_compute[189608]: 2025-11-24 22:29:12.002 189613 DEBUG nova.scheduler.client.report [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:29:12 compute-0 nova_compute[189608]: 2025-11-24 22:29:12.026 189613 DEBUG oslo_concurrency.lockutils [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.178s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:29:12 compute-0 nova_compute[189608]: 2025-11-24 22:29:12.027 189613 DEBUG nova.compute.manager [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 24 22:29:12 compute-0 nova_compute[189608]: 2025-11-24 22:29:12.075 189613 DEBUG nova.compute.manager [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 24 22:29:12 compute-0 nova_compute[189608]: 2025-11-24 22:29:12.076 189613 DEBUG nova.network.neutron [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 24 22:29:12 compute-0 nova_compute[189608]: 2025-11-24 22:29:12.095 189613 INFO nova.virt.libvirt.driver [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 24 22:29:12 compute-0 nova_compute[189608]: 2025-11-24 22:29:12.111 189613 DEBUG nova.compute.manager [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 24 22:29:12 compute-0 nova_compute[189608]: 2025-11-24 22:29:12.196 189613 DEBUG nova.compute.manager [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 24 22:29:12 compute-0 nova_compute[189608]: 2025-11-24 22:29:12.197 189613 DEBUG nova.virt.libvirt.driver [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 24 22:29:12 compute-0 nova_compute[189608]: 2025-11-24 22:29:12.198 189613 INFO nova.virt.libvirt.driver [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Creating image(s)
Nov 24 22:29:12 compute-0 nova_compute[189608]: 2025-11-24 22:29:12.198 189613 DEBUG oslo_concurrency.lockutils [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Acquiring lock "/var/lib/nova/instances/8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:29:12 compute-0 nova_compute[189608]: 2025-11-24 22:29:12.199 189613 DEBUG oslo_concurrency.lockutils [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Lock "/var/lib/nova/instances/8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:29:12 compute-0 nova_compute[189608]: 2025-11-24 22:29:12.200 189613 DEBUG oslo_concurrency.lockutils [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Lock "/var/lib/nova/instances/8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:29:12 compute-0 nova_compute[189608]: 2025-11-24 22:29:12.200 189613 DEBUG oslo_concurrency.lockutils [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Acquiring lock "a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:29:12 compute-0 nova_compute[189608]: 2025-11-24 22:29:12.201 189613 DEBUG oslo_concurrency.lockutils [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Lock "a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:29:12 compute-0 nova_compute[189608]: 2025-11-24 22:29:12.988 189613 DEBUG nova.policy [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd2ee7a1723f8477f92f62974f0676bd8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5225a786b1b64fcbbd2af0a1b5082c92', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 24 22:29:13 compute-0 podman[251290]: 2025-11-24 22:29:13.549108426 +0000 UTC m=+0.105004833 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, org.label-schema.license=GPLv2, config_id=edpm, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 22:29:13 compute-0 nova_compute[189608]: 2025-11-24 22:29:13.974 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:13 compute-0 nova_compute[189608]: 2025-11-24 22:29:13.992 189613 DEBUG oslo_concurrency.processutils [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:29:14 compute-0 nova_compute[189608]: 2025-11-24 22:29:14.070 189613 DEBUG oslo_concurrency.processutils [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e.part --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:29:14 compute-0 nova_compute[189608]: 2025-11-24 22:29:14.071 189613 DEBUG nova.virt.images [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] ec71d7d5-c197-4331-bf8d-e2de71a8419f was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Nov 24 22:29:14 compute-0 nova_compute[189608]: 2025-11-24 22:29:14.074 189613 DEBUG nova.privsep.utils [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 24 22:29:14 compute-0 nova_compute[189608]: 2025-11-24 22:29:14.075 189613 DEBUG oslo_concurrency.processutils [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e.part /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:29:14 compute-0 nova_compute[189608]: 2025-11-24 22:29:14.307 189613 DEBUG oslo_concurrency.processutils [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e.part /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e.converted" returned: 0 in 0.232s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:29:14 compute-0 nova_compute[189608]: 2025-11-24 22:29:14.313 189613 DEBUG oslo_concurrency.processutils [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:29:14 compute-0 nova_compute[189608]: 2025-11-24 22:29:14.405 189613 DEBUG oslo_concurrency.processutils [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e.converted --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:29:14 compute-0 nova_compute[189608]: 2025-11-24 22:29:14.407 189613 DEBUG oslo_concurrency.lockutils [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Lock "a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.206s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:29:14 compute-0 nova_compute[189608]: 2025-11-24 22:29:14.422 189613 DEBUG oslo_concurrency.processutils [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:29:14 compute-0 nova_compute[189608]: 2025-11-24 22:29:14.497 189613 DEBUG oslo_concurrency.processutils [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:29:14 compute-0 nova_compute[189608]: 2025-11-24 22:29:14.499 189613 DEBUG oslo_concurrency.lockutils [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Acquiring lock "a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:29:14 compute-0 nova_compute[189608]: 2025-11-24 22:29:14.500 189613 DEBUG oslo_concurrency.lockutils [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Lock "a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:29:14 compute-0 nova_compute[189608]: 2025-11-24 22:29:14.515 189613 DEBUG oslo_concurrency.processutils [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:29:14 compute-0 nova_compute[189608]: 2025-11-24 22:29:14.602 189613 DEBUG oslo_concurrency.processutils [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:29:14 compute-0 nova_compute[189608]: 2025-11-24 22:29:14.604 189613 DEBUG oslo_concurrency.processutils [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e,backing_fmt=raw /var/lib/nova/instances/8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:29:14 compute-0 nova_compute[189608]: 2025-11-24 22:29:14.665 189613 DEBUG oslo_concurrency.processutils [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e,backing_fmt=raw /var/lib/nova/instances/8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/disk 1073741824" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:29:14 compute-0 nova_compute[189608]: 2025-11-24 22:29:14.666 189613 DEBUG oslo_concurrency.lockutils [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Lock "a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.166s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:29:14 compute-0 nova_compute[189608]: 2025-11-24 22:29:14.668 189613 DEBUG oslo_concurrency.processutils [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:29:14 compute-0 nova_compute[189608]: 2025-11-24 22:29:14.717 189613 DEBUG nova.network.neutron [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Successfully created port: e0390902-6ae3-485d-b497-f57a8cca001c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 24 22:29:14 compute-0 nova_compute[189608]: 2025-11-24 22:29:14.730 189613 DEBUG oslo_concurrency.processutils [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:29:14 compute-0 nova_compute[189608]: 2025-11-24 22:29:14.732 189613 DEBUG nova.virt.disk.api [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Checking if we can resize image /var/lib/nova/instances/8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 24 22:29:14 compute-0 nova_compute[189608]: 2025-11-24 22:29:14.732 189613 DEBUG oslo_concurrency.processutils [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:29:14 compute-0 nova_compute[189608]: 2025-11-24 22:29:14.813 189613 DEBUG oslo_concurrency.processutils [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:29:14 compute-0 nova_compute[189608]: 2025-11-24 22:29:14.814 189613 DEBUG nova.virt.disk.api [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Cannot resize image /var/lib/nova/instances/8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Nov 24 22:29:14 compute-0 nova_compute[189608]: 2025-11-24 22:29:14.815 189613 DEBUG nova.objects.instance [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Lazy-loading 'migration_context' on Instance uuid 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:29:14 compute-0 nova_compute[189608]: 2025-11-24 22:29:14.835 189613 DEBUG nova.virt.libvirt.driver [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 24 22:29:14 compute-0 nova_compute[189608]: 2025-11-24 22:29:14.836 189613 DEBUG nova.virt.libvirt.driver [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Ensure instance console log exists: /var/lib/nova/instances/8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 24 22:29:14 compute-0 nova_compute[189608]: 2025-11-24 22:29:14.836 189613 DEBUG oslo_concurrency.lockutils [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:29:14 compute-0 nova_compute[189608]: 2025-11-24 22:29:14.837 189613 DEBUG oslo_concurrency.lockutils [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:29:14 compute-0 nova_compute[189608]: 2025-11-24 22:29:14.838 189613 DEBUG oslo_concurrency.lockutils [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:29:15 compute-0 nova_compute[189608]: 2025-11-24 22:29:15.352 189613 DEBUG oslo_concurrency.lockutils [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Acquiring lock "54138d77-c8e9-474f-8c09-e7f7cb37cbaa" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:29:15 compute-0 nova_compute[189608]: 2025-11-24 22:29:15.353 189613 DEBUG oslo_concurrency.lockutils [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Lock "54138d77-c8e9-474f-8c09-e7f7cb37cbaa" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:29:15 compute-0 nova_compute[189608]: 2025-11-24 22:29:15.375 189613 DEBUG nova.compute.manager [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 24 22:29:15 compute-0 nova_compute[189608]: 2025-11-24 22:29:15.467 189613 DEBUG oslo_concurrency.lockutils [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:29:15 compute-0 nova_compute[189608]: 2025-11-24 22:29:15.469 189613 DEBUG oslo_concurrency.lockutils [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:29:15 compute-0 nova_compute[189608]: 2025-11-24 22:29:15.484 189613 DEBUG nova.virt.hardware [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 24 22:29:15 compute-0 nova_compute[189608]: 2025-11-24 22:29:15.486 189613 INFO nova.compute.claims [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Claim successful on node compute-0.ctlplane.example.com
Nov 24 22:29:15 compute-0 nova_compute[189608]: 2025-11-24 22:29:15.559 189613 DEBUG nova.network.neutron [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Successfully updated port: e0390902-6ae3-485d-b497-f57a8cca001c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 24 22:29:15 compute-0 nova_compute[189608]: 2025-11-24 22:29:15.584 189613 DEBUG oslo_concurrency.lockutils [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Acquiring lock "refresh_cache-8b851edf-b3aa-4ca0-a142-8dd0d0e6270a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:29:15 compute-0 nova_compute[189608]: 2025-11-24 22:29:15.585 189613 DEBUG oslo_concurrency.lockutils [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Acquired lock "refresh_cache-8b851edf-b3aa-4ca0-a142-8dd0d0e6270a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:29:15 compute-0 nova_compute[189608]: 2025-11-24 22:29:15.587 189613 DEBUG nova.network.neutron [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 22:29:15 compute-0 nova_compute[189608]: 2025-11-24 22:29:15.659 189613 DEBUG nova.compute.provider_tree [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:29:15 compute-0 nova_compute[189608]: 2025-11-24 22:29:15.674 189613 DEBUG nova.scheduler.client.report [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:29:15 compute-0 nova_compute[189608]: 2025-11-24 22:29:15.698 189613 DEBUG oslo_concurrency.lockutils [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.229s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:29:15 compute-0 nova_compute[189608]: 2025-11-24 22:29:15.699 189613 DEBUG nova.compute.manager [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 24 22:29:15 compute-0 nova_compute[189608]: 2025-11-24 22:29:15.777 189613 DEBUG nova.compute.manager [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 24 22:29:15 compute-0 nova_compute[189608]: 2025-11-24 22:29:15.778 189613 DEBUG nova.network.neutron [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 24 22:29:15 compute-0 nova_compute[189608]: 2025-11-24 22:29:15.803 189613 INFO nova.virt.libvirt.driver [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 24 22:29:15 compute-0 nova_compute[189608]: 2025-11-24 22:29:15.823 189613 DEBUG nova.compute.manager [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 24 22:29:16 compute-0 nova_compute[189608]: 2025-11-24 22:29:16.137 189613 DEBUG nova.network.neutron [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 22:29:16 compute-0 nova_compute[189608]: 2025-11-24 22:29:16.172 189613 DEBUG nova.compute.manager [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 24 22:29:16 compute-0 nova_compute[189608]: 2025-11-24 22:29:16.175 189613 DEBUG nova.virt.libvirt.driver [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 24 22:29:16 compute-0 nova_compute[189608]: 2025-11-24 22:29:16.176 189613 INFO nova.virt.libvirt.driver [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Creating image(s)
Nov 24 22:29:16 compute-0 nova_compute[189608]: 2025-11-24 22:29:16.178 189613 DEBUG oslo_concurrency.lockutils [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Acquiring lock "/var/lib/nova/instances/54138d77-c8e9-474f-8c09-e7f7cb37cbaa/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:29:16 compute-0 nova_compute[189608]: 2025-11-24 22:29:16.179 189613 DEBUG oslo_concurrency.lockutils [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Lock "/var/lib/nova/instances/54138d77-c8e9-474f-8c09-e7f7cb37cbaa/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:29:16 compute-0 nova_compute[189608]: 2025-11-24 22:29:16.181 189613 DEBUG oslo_concurrency.lockutils [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Lock "/var/lib/nova/instances/54138d77-c8e9-474f-8c09-e7f7cb37cbaa/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:29:16 compute-0 nova_compute[189608]: 2025-11-24 22:29:16.209 189613 DEBUG oslo_concurrency.processutils [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:29:16 compute-0 nova_compute[189608]: 2025-11-24 22:29:16.270 189613 DEBUG oslo_concurrency.processutils [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:29:16 compute-0 nova_compute[189608]: 2025-11-24 22:29:16.273 189613 DEBUG oslo_concurrency.lockutils [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Acquiring lock "a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:29:16 compute-0 nova_compute[189608]: 2025-11-24 22:29:16.275 189613 DEBUG oslo_concurrency.lockutils [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Lock "a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:29:16 compute-0 nova_compute[189608]: 2025-11-24 22:29:16.299 189613 DEBUG oslo_concurrency.processutils [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:29:16 compute-0 nova_compute[189608]: 2025-11-24 22:29:16.325 189613 DEBUG nova.policy [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '60bec8f905ce42c189140229b54eb832', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92c99dbee50c412a8684f6b46045b25e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 24 22:29:16 compute-0 nova_compute[189608]: 2025-11-24 22:29:16.363 189613 DEBUG oslo_concurrency.processutils [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:29:16 compute-0 nova_compute[189608]: 2025-11-24 22:29:16.364 189613 DEBUG oslo_concurrency.processutils [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e,backing_fmt=raw /var/lib/nova/instances/54138d77-c8e9-474f-8c09-e7f7cb37cbaa/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:29:16 compute-0 nova_compute[189608]: 2025-11-24 22:29:16.430 189613 DEBUG oslo_concurrency.processutils [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e,backing_fmt=raw /var/lib/nova/instances/54138d77-c8e9-474f-8c09-e7f7cb37cbaa/disk 1073741824" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:29:16 compute-0 nova_compute[189608]: 2025-11-24 22:29:16.432 189613 DEBUG oslo_concurrency.lockutils [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Lock "a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.157s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:29:16 compute-0 nova_compute[189608]: 2025-11-24 22:29:16.433 189613 DEBUG oslo_concurrency.processutils [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:29:16 compute-0 nova_compute[189608]: 2025-11-24 22:29:16.497 189613 DEBUG oslo_concurrency.processutils [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:29:16 compute-0 nova_compute[189608]: 2025-11-24 22:29:16.499 189613 DEBUG nova.virt.disk.api [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Checking if we can resize image /var/lib/nova/instances/54138d77-c8e9-474f-8c09-e7f7cb37cbaa/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 24 22:29:16 compute-0 nova_compute[189608]: 2025-11-24 22:29:16.500 189613 DEBUG oslo_concurrency.processutils [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/54138d77-c8e9-474f-8c09-e7f7cb37cbaa/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:29:16 compute-0 nova_compute[189608]: 2025-11-24 22:29:16.518 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:16 compute-0 nova_compute[189608]: 2025-11-24 22:29:16.556 189613 DEBUG nova.compute.manager [req-024d75c3-ab63-49e1-b8cb-576b5de47c21 req-b5a74bb1-508d-4612-9d23-91c0619a2e88 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Received event network-changed-e0390902-6ae3-485d-b497-f57a8cca001c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:29:16 compute-0 nova_compute[189608]: 2025-11-24 22:29:16.557 189613 DEBUG nova.compute.manager [req-024d75c3-ab63-49e1-b8cb-576b5de47c21 req-b5a74bb1-508d-4612-9d23-91c0619a2e88 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Refreshing instance network info cache due to event network-changed-e0390902-6ae3-485d-b497-f57a8cca001c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 22:29:16 compute-0 nova_compute[189608]: 2025-11-24 22:29:16.557 189613 DEBUG oslo_concurrency.lockutils [req-024d75c3-ab63-49e1-b8cb-576b5de47c21 req-b5a74bb1-508d-4612-9d23-91c0619a2e88 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "refresh_cache-8b851edf-b3aa-4ca0-a142-8dd0d0e6270a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:29:16 compute-0 nova_compute[189608]: 2025-11-24 22:29:16.560 189613 DEBUG oslo_concurrency.processutils [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/54138d77-c8e9-474f-8c09-e7f7cb37cbaa/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:29:16 compute-0 nova_compute[189608]: 2025-11-24 22:29:16.565 189613 DEBUG nova.virt.disk.api [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Cannot resize image /var/lib/nova/instances/54138d77-c8e9-474f-8c09-e7f7cb37cbaa/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Nov 24 22:29:16 compute-0 nova_compute[189608]: 2025-11-24 22:29:16.565 189613 DEBUG nova.objects.instance [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Lazy-loading 'migration_context' on Instance uuid 54138d77-c8e9-474f-8c09-e7f7cb37cbaa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:29:16 compute-0 nova_compute[189608]: 2025-11-24 22:29:16.587 189613 DEBUG nova.virt.libvirt.driver [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 24 22:29:16 compute-0 nova_compute[189608]: 2025-11-24 22:29:16.588 189613 DEBUG nova.virt.libvirt.driver [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Ensure instance console log exists: /var/lib/nova/instances/54138d77-c8e9-474f-8c09-e7f7cb37cbaa/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 24 22:29:16 compute-0 nova_compute[189608]: 2025-11-24 22:29:16.589 189613 DEBUG oslo_concurrency.lockutils [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:29:16 compute-0 nova_compute[189608]: 2025-11-24 22:29:16.589 189613 DEBUG oslo_concurrency.lockutils [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:29:16 compute-0 nova_compute[189608]: 2025-11-24 22:29:16.590 189613 DEBUG oslo_concurrency.lockutils [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:29:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:17.601 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d2f80616-70e9-484c-836d-1edab81fe5d9, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.702 189613 DEBUG nova.network.neutron [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Updating instance_info_cache with network_info: [{"id": "e0390902-6ae3-485d-b497-f57a8cca001c", "address": "fa:16:3e:84:e8:3d", "network": {"id": "c09fe20f-09f5-4457-8c18-2dd55de423b7", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1769316562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5225a786b1b64fcbbd2af0a1b5082c92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0390902-6a", "ovs_interfaceid": "e0390902-6ae3-485d-b497-f57a8cca001c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.729 189613 DEBUG oslo_concurrency.lockutils [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Releasing lock "refresh_cache-8b851edf-b3aa-4ca0-a142-8dd0d0e6270a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.729 189613 DEBUG nova.compute.manager [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Instance network_info: |[{"id": "e0390902-6ae3-485d-b497-f57a8cca001c", "address": "fa:16:3e:84:e8:3d", "network": {"id": "c09fe20f-09f5-4457-8c18-2dd55de423b7", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1769316562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5225a786b1b64fcbbd2af0a1b5082c92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0390902-6a", "ovs_interfaceid": "e0390902-6ae3-485d-b497-f57a8cca001c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.731 189613 DEBUG oslo_concurrency.lockutils [req-024d75c3-ab63-49e1-b8cb-576b5de47c21 req-b5a74bb1-508d-4612-9d23-91c0619a2e88 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquired lock "refresh_cache-8b851edf-b3aa-4ca0-a142-8dd0d0e6270a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.732 189613 DEBUG nova.network.neutron [req-024d75c3-ab63-49e1-b8cb-576b5de47c21 req-b5a74bb1-508d-4612-9d23-91c0619a2e88 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Refreshing network info cache for port e0390902-6ae3-485d-b497-f57a8cca001c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.738 189613 DEBUG nova.virt.libvirt.driver [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Start _get_guest_xml network_info=[{"id": "e0390902-6ae3-485d-b497-f57a8cca001c", "address": "fa:16:3e:84:e8:3d", "network": {"id": "c09fe20f-09f5-4457-8c18-2dd55de423b7", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1769316562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5225a786b1b64fcbbd2af0a1b5082c92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0390902-6a", "ovs_interfaceid": "e0390902-6ae3-485d-b497-f57a8cca001c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T22:28:16Z,direct_url=<?>,disk_format='qcow2',id=ec71d7d5-c197-4331-bf8d-e2de71a8419f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='309342b7e3e849b2a5dd56651d8fa068',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T22:28:18Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'size': 0, 'encryption_format': None, 'guest_format': None, 'image_id': 'ec71d7d5-c197-4331-bf8d-e2de71a8419f'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.759 189613 WARNING nova.virt.libvirt.driver [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.772 189613 DEBUG nova.virt.libvirt.host [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.773 189613 DEBUG nova.virt.libvirt.host [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.779 189613 DEBUG nova.virt.libvirt.host [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.779 189613 DEBUG nova.virt.libvirt.host [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.780 189613 DEBUG nova.virt.libvirt.driver [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.780 189613 DEBUG nova.virt.hardware [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-24T22:28:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a49f1e6c-1051-4dea-812e-0063121444a0',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T22:28:16Z,direct_url=<?>,disk_format='qcow2',id=ec71d7d5-c197-4331-bf8d-e2de71a8419f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='309342b7e3e849b2a5dd56651d8fa068',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T22:28:18Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.781 189613 DEBUG nova.virt.hardware [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.781 189613 DEBUG nova.virt.hardware [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.781 189613 DEBUG nova.virt.hardware [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.781 189613 DEBUG nova.virt.hardware [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.782 189613 DEBUG nova.virt.hardware [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.782 189613 DEBUG nova.virt.hardware [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.782 189613 DEBUG nova.virt.hardware [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.782 189613 DEBUG nova.virt.hardware [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.783 189613 DEBUG nova.virt.hardware [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.783 189613 DEBUG nova.virt.hardware [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.788 189613 DEBUG nova.virt.libvirt.vif [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T22:29:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-476448184',display_name='tempest-AttachInterfacesUnderV243Test-server-476448184',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-476448184',id=6,image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBModyKpHo+bUle696Y53IH9hCC0Nmu0qTbd1dYeKChZbasisMOqUyA99gseDuuBNddhIrU0ChnVA7KG8QVFP+O3BAeUOzrsyIrYuwW2ipaQtlPgdQM4pYzzTX/M2GYBy6Q==',key_name='tempest-keypair-1779492647',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5225a786b1b64fcbbd2af0a1b5082c92',ramdisk_id='',reservation_id='r-lj5trgax',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-929420375',owner_user_name='tempest-AttachInterfacesUnderV243Test-929420375-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T22:29:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d2ee7a1723f8477f92f62974f0676bd8',uuid=8b851edf-b3aa-4ca0-a142-8dd0d0e6270a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e0390902-6ae3-485d-b497-f57a8cca001c", "address": "fa:16:3e:84:e8:3d", "network": {"id": "c09fe20f-09f5-4457-8c18-2dd55de423b7", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1769316562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5225a786b1b64fcbbd2af0a1b5082c92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0390902-6a", "ovs_interfaceid": "e0390902-6ae3-485d-b497-f57a8cca001c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.789 189613 DEBUG nova.network.os_vif_util [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Converting VIF {"id": "e0390902-6ae3-485d-b497-f57a8cca001c", "address": "fa:16:3e:84:e8:3d", "network": {"id": "c09fe20f-09f5-4457-8c18-2dd55de423b7", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1769316562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5225a786b1b64fcbbd2af0a1b5082c92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0390902-6a", "ovs_interfaceid": "e0390902-6ae3-485d-b497-f57a8cca001c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.790 189613 DEBUG nova.network.os_vif_util [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:e8:3d,bridge_name='br-int',has_traffic_filtering=True,id=e0390902-6ae3-485d-b497-f57a8cca001c,network=Network(c09fe20f-09f5-4457-8c18-2dd55de423b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0390902-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.791 189613 DEBUG nova.objects.instance [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.809 189613 DEBUG nova.virt.libvirt.driver [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] End _get_guest_xml xml=<domain type="kvm">
Nov 24 22:29:17 compute-0 nova_compute[189608]:   <uuid>8b851edf-b3aa-4ca0-a142-8dd0d0e6270a</uuid>
Nov 24 22:29:17 compute-0 nova_compute[189608]:   <name>instance-00000006</name>
Nov 24 22:29:17 compute-0 nova_compute[189608]:   <memory>131072</memory>
Nov 24 22:29:17 compute-0 nova_compute[189608]:   <vcpu>1</vcpu>
Nov 24 22:29:17 compute-0 nova_compute[189608]:   <metadata>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 22:29:17 compute-0 nova_compute[189608]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:       <nova:name>tempest-AttachInterfacesUnderV243Test-server-476448184</nova:name>
Nov 24 22:29:17 compute-0 nova_compute[189608]:       <nova:creationTime>2025-11-24 22:29:17</nova:creationTime>
Nov 24 22:29:17 compute-0 nova_compute[189608]:       <nova:flavor name="m1.nano">
Nov 24 22:29:17 compute-0 nova_compute[189608]:         <nova:memory>128</nova:memory>
Nov 24 22:29:17 compute-0 nova_compute[189608]:         <nova:disk>1</nova:disk>
Nov 24 22:29:17 compute-0 nova_compute[189608]:         <nova:swap>0</nova:swap>
Nov 24 22:29:17 compute-0 nova_compute[189608]:         <nova:ephemeral>0</nova:ephemeral>
Nov 24 22:29:17 compute-0 nova_compute[189608]:         <nova:vcpus>1</nova:vcpus>
Nov 24 22:29:17 compute-0 nova_compute[189608]:       </nova:flavor>
Nov 24 22:29:17 compute-0 nova_compute[189608]:       <nova:owner>
Nov 24 22:29:17 compute-0 nova_compute[189608]:         <nova:user uuid="d2ee7a1723f8477f92f62974f0676bd8">tempest-AttachInterfacesUnderV243Test-929420375-project-member</nova:user>
Nov 24 22:29:17 compute-0 nova_compute[189608]:         <nova:project uuid="5225a786b1b64fcbbd2af0a1b5082c92">tempest-AttachInterfacesUnderV243Test-929420375</nova:project>
Nov 24 22:29:17 compute-0 nova_compute[189608]:       </nova:owner>
Nov 24 22:29:17 compute-0 nova_compute[189608]:       <nova:root type="image" uuid="ec71d7d5-c197-4331-bf8d-e2de71a8419f"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:       <nova:ports>
Nov 24 22:29:17 compute-0 nova_compute[189608]:         <nova:port uuid="e0390902-6ae3-485d-b497-f57a8cca001c">
Nov 24 22:29:17 compute-0 nova_compute[189608]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:         </nova:port>
Nov 24 22:29:17 compute-0 nova_compute[189608]:       </nova:ports>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     </nova:instance>
Nov 24 22:29:17 compute-0 nova_compute[189608]:   </metadata>
Nov 24 22:29:17 compute-0 nova_compute[189608]:   <sysinfo type="smbios">
Nov 24 22:29:17 compute-0 nova_compute[189608]:     <system>
Nov 24 22:29:17 compute-0 nova_compute[189608]:       <entry name="manufacturer">RDO</entry>
Nov 24 22:29:17 compute-0 nova_compute[189608]:       <entry name="product">OpenStack Compute</entry>
Nov 24 22:29:17 compute-0 nova_compute[189608]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 22:29:17 compute-0 nova_compute[189608]:       <entry name="serial">8b851edf-b3aa-4ca0-a142-8dd0d0e6270a</entry>
Nov 24 22:29:17 compute-0 nova_compute[189608]:       <entry name="uuid">8b851edf-b3aa-4ca0-a142-8dd0d0e6270a</entry>
Nov 24 22:29:17 compute-0 nova_compute[189608]:       <entry name="family">Virtual Machine</entry>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     </system>
Nov 24 22:29:17 compute-0 nova_compute[189608]:   </sysinfo>
Nov 24 22:29:17 compute-0 nova_compute[189608]:   <os>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     <boot dev="hd"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     <smbios mode="sysinfo"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:   </os>
Nov 24 22:29:17 compute-0 nova_compute[189608]:   <features>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     <acpi/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     <apic/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     <vmcoreinfo/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:   </features>
Nov 24 22:29:17 compute-0 nova_compute[189608]:   <clock offset="utc">
Nov 24 22:29:17 compute-0 nova_compute[189608]:     <timer name="pit" tickpolicy="delay"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     <timer name="hpet" present="no"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:   </clock>
Nov 24 22:29:17 compute-0 nova_compute[189608]:   <cpu mode="host-model" match="exact">
Nov 24 22:29:17 compute-0 nova_compute[189608]:     <topology sockets="1" cores="1" threads="1"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:   </cpu>
Nov 24 22:29:17 compute-0 nova_compute[189608]:   <devices>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     <disk type="file" device="disk">
Nov 24 22:29:17 compute-0 nova_compute[189608]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:       <source file="/var/lib/nova/instances/8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/disk"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:       <target dev="vda" bus="virtio"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     </disk>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     <disk type="file" device="cdrom">
Nov 24 22:29:17 compute-0 nova_compute[189608]:       <driver name="qemu" type="raw" cache="none"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:       <source file="/var/lib/nova/instances/8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/disk.config"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:       <target dev="sda" bus="sata"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     </disk>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     <interface type="ethernet">
Nov 24 22:29:17 compute-0 nova_compute[189608]:       <mac address="fa:16:3e:84:e8:3d"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:       <model type="virtio"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:       <driver name="vhost" rx_queue_size="512"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:       <mtu size="1442"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:       <target dev="tape0390902-6a"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     </interface>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     <serial type="pty">
Nov 24 22:29:17 compute-0 nova_compute[189608]:       <log file="/var/lib/nova/instances/8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/console.log" append="off"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     </serial>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     <video>
Nov 24 22:29:17 compute-0 nova_compute[189608]:       <model type="virtio"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     </video>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     <input type="tablet" bus="usb"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     <rng model="virtio">
Nov 24 22:29:17 compute-0 nova_compute[189608]:       <backend model="random">/dev/urandom</backend>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     </rng>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     <controller type="usb" index="0"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     <memballoon model="virtio">
Nov 24 22:29:17 compute-0 nova_compute[189608]:       <stats period="10"/>
Nov 24 22:29:17 compute-0 nova_compute[189608]:     </memballoon>
Nov 24 22:29:17 compute-0 nova_compute[189608]:   </devices>
Nov 24 22:29:17 compute-0 nova_compute[189608]: </domain>
Nov 24 22:29:17 compute-0 nova_compute[189608]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
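The domain XML dumped above carries Nova's instance metadata in the http://openstack.org/xmlns/libvirt/nova/1.1 namespace, alongside the flavor limits (128 MB RAM, 1 vCPU, 1 GB root disk) and the fixed IP of the single port. As a minimal sketch, assuming only the Python standard library and that the XML between <domain> and </domain> has been copied out of the log into a local file domain.xml, the interesting fields can be pulled out like this:

    import xml.etree.ElementTree as ET

    # namespace Nova uses for its <metadata> block in the guest XML
    NOVA_NS = {"nova": "http://openstack.org/xmlns/libvirt/nova/1.1"}

    root = ET.parse("domain.xml").getroot()                  # the <domain> element
    meta = root.find("./metadata/nova:instance", NOVA_NS)

    name   = meta.findtext("nova:name", namespaces=NOVA_NS)
    flavor = meta.find("nova:flavor", NOVA_NS)
    memory = flavor.findtext("nova:memory", namespaces=NOVA_NS)
    vcpus  = flavor.findtext("nova:vcpus", namespaces=NOVA_NS)
    ip     = meta.find(".//nova:ip", NOVA_NS).get("address")

    # -> tempest-AttachInterfacesUnderV243Test-server-476448184 128 1 10.100.0.13
    print(name, memory, vcpus, ip)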
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.810 189613 DEBUG nova.compute.manager [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Preparing to wait for external event network-vif-plugged-e0390902-6ae3-485d-b497-f57a8cca001c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.811 189613 DEBUG oslo_concurrency.lockutils [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Acquiring lock "8b851edf-b3aa-4ca0-a142-8dd0d0e6270a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.812 189613 DEBUG oslo_concurrency.lockutils [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Lock "8b851edf-b3aa-4ca0-a142-8dd0d0e6270a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.813 189613 DEBUG oslo_concurrency.lockutils [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Lock "8b851edf-b3aa-4ca0-a142-8dd0d0e6270a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
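The three oslo_concurrency lines above are the standard named-lock pattern: the per-instance "<uuid>-events" lock is taken just long enough to register the network-vif-plugged event the compute manager is about to wait for, then released about a millisecond later. A minimal sketch of the same pattern, assuming oslo.concurrency is installed and using a plain dict in place of Nova's InstanceEvents registry:

    from oslo_concurrency import lockutils

    pending_events = {}   # illustrative stand-in for Nova's per-instance event registry

    def prepare_for_instance_event(instance_uuid, event_name):
        # hold the "<uuid>-events" lock only while mutating the registry,
        # mirroring the acquire/release pair visible in the log above
        with lockutils.lock(f"{instance_uuid}-events"):
            pending_events.setdefault(instance_uuid, set()).add(event_name)

    prepare_for_instance_event(
        "8b851edf-b3aa-4ca0-a142-8dd0d0e6270a",
        "network-vif-plugged-e0390902-6ae3-485d-b497-f57a8cca001c")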
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.814 189613 DEBUG nova.virt.libvirt.vif [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T22:29:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-476448184',display_name='tempest-AttachInterfacesUnderV243Test-server-476448184',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-476448184',id=6,image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBModyKpHo+bUle696Y53IH9hCC0Nmu0qTbd1dYeKChZbasisMOqUyA99gseDuuBNddhIrU0ChnVA7KG8QVFP+O3BAeUOzrsyIrYuwW2ipaQtlPgdQM4pYzzTX/M2GYBy6Q==',key_name='tempest-keypair-1779492647',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5225a786b1b64fcbbd2af0a1b5082c92',ramdisk_id='',reservation_id='r-lj5trgax',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-929420375',owner_user_name='tempest-AttachInterfacesUnderV243Test-929420375-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T22:29:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d2ee7a1723f8477f92f62974f0676bd8',uuid=8b851edf-b3aa-4ca0-a142-8dd0d0e6270a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e0390902-6ae3-485d-b497-f57a8cca001c", "address": "fa:16:3e:84:e8:3d", "network": {"id": "c09fe20f-09f5-4457-8c18-2dd55de423b7", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1769316562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5225a786b1b64fcbbd2af0a1b5082c92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0390902-6a", 
"ovs_interfaceid": "e0390902-6ae3-485d-b497-f57a8cca001c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.815 189613 DEBUG nova.network.os_vif_util [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Converting VIF {"id": "e0390902-6ae3-485d-b497-f57a8cca001c", "address": "fa:16:3e:84:e8:3d", "network": {"id": "c09fe20f-09f5-4457-8c18-2dd55de423b7", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1769316562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5225a786b1b64fcbbd2af0a1b5082c92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0390902-6a", "ovs_interfaceid": "e0390902-6ae3-485d-b497-f57a8cca001c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.816 189613 DEBUG nova.network.os_vif_util [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:e8:3d,bridge_name='br-int',has_traffic_filtering=True,id=e0390902-6ae3-485d-b497-f57a8cca001c,network=Network(c09fe20f-09f5-4457-8c18-2dd55de423b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0390902-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.817 189613 DEBUG os_vif [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:e8:3d,bridge_name='br-int',has_traffic_filtering=True,id=e0390902-6ae3-485d-b497-f57a8cca001c,network=Network(c09fe20f-09f5-4457-8c18-2dd55de423b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0390902-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.818 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.819 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.820 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.828 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.829 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape0390902-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.830 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape0390902-6a, col_values=(('external_ids', {'iface-id': 'e0390902-6ae3-485d-b497-f57a8cca001c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:84:e8:3d', 'vm-uuid': '8b851edf-b3aa-4ca0-a142-8dd0d0e6270a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.832 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:17 compute-0 NetworkManager[56413]: <info>  [1764023357.8339] manager: (tape0390902-6a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.836 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.846 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.848 189613 INFO os_vif [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:e8:3d,bridge_name='br-int',has_traffic_filtering=True,id=e0390902-6ae3-485d-b497-f57a8cca001c,network=Network(c09fe20f-09f5-4457-8c18-2dd55de423b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0390902-6a')
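Taken together, the AddPortCommand and DbSetCommand transactions above are what attach the tap device to br-int and stamp its Interface row with the Neutron port ID, MAC and instance UUID in external_ids; it is that iface-id which lets ovn-controller claim the logical port a moment later. A rough stand-alone equivalent, sketched with subprocess and the ovs-vsctl CLI rather than the ovsdbapp IDL that os-vif actually uses (all values copied from the log):

    import subprocess

    port = "tape0390902-6a"
    subprocess.run(
        ["ovs-vsctl",
         "--may-exist", "add-port", "br-int", port,   # idempotent, like may_exist=True
         "--", "set", "Interface", port,
         "external_ids:iface-id=e0390902-6ae3-485d-b497-f57a8cca001c",
         "external_ids:iface-status=active",
         "external_ids:attached-mac=fa:16:3e:84:e8:3d",
         "external_ids:vm-uuid=8b851edf-b3aa-4ca0-a142-8dd0d0e6270a"],
        check=True,
    )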
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.907 189613 DEBUG nova.virt.libvirt.driver [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.908 189613 DEBUG nova.virt.libvirt.driver [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.910 189613 DEBUG nova.virt.libvirt.driver [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] No VIF found with MAC fa:16:3e:84:e8:3d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 24 22:29:17 compute-0 nova_compute[189608]: 2025-11-24 22:29:17.911 189613 INFO nova.virt.libvirt.driver [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Using config drive
Nov 24 22:29:18 compute-0 nova_compute[189608]: 2025-11-24 22:29:18.257 189613 DEBUG nova.network.neutron [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Successfully created port: 50a96df6-c0e9-444e-a10a-8d2747085ccf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 24 22:29:18 compute-0 nova_compute[189608]: 2025-11-24 22:29:18.384 189613 INFO nova.virt.libvirt.driver [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Creating config drive at /var/lib/nova/instances/8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/disk.config
Nov 24 22:29:18 compute-0 nova_compute[189608]: 2025-11-24 22:29:18.396 189613 DEBUG oslo_concurrency.processutils [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb_i0x20y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:29:18 compute-0 nova_compute[189608]: 2025-11-24 22:29:18.530 189613 DEBUG oslo_concurrency.processutils [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb_i0x20y" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
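The config drive attached earlier as the SATA cdrom (disk.config) is just an ISO9660 image labelled config-2, built from a temporary directory that Nova populates with the instance metadata before calling mkisofs. A sketch of the same invocation the log records, assuming the staging directory already exists; note that -publisher takes the whole version string as a single argument even though the log prints it unquoted:

    import subprocess

    instance_dir = "/var/lib/nova/instances/8b851edf-b3aa-4ca0-a142-8dd0d0e6270a"
    staging_dir  = "/tmp/tmpb_i0x20y"    # temp dir holding the generated metadata tree

    subprocess.run(
        ["/usr/bin/mkisofs",
         "-o", f"{instance_dir}/disk.config",
         "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
         "-quiet", "-J", "-r",
         "-V", "config-2",               # volume label config-drive consumers look for
         staging_dir],
        check=True,
    )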
Nov 24 22:29:18 compute-0 NetworkManager[56413]: <info>  [1764023358.6581] manager: (tape0390902-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Nov 24 22:29:18 compute-0 kernel: tape0390902-6a: entered promiscuous mode
Nov 24 22:29:18 compute-0 ovn_controller[97889]: 2025-11-24T22:29:18Z|00066|binding|INFO|Claiming lport e0390902-6ae3-485d-b497-f57a8cca001c for this chassis.
Nov 24 22:29:18 compute-0 ovn_controller[97889]: 2025-11-24T22:29:18Z|00067|binding|INFO|e0390902-6ae3-485d-b497-f57a8cca001c: Claiming fa:16:3e:84:e8:3d 10.100.0.13
Nov 24 22:29:18 compute-0 nova_compute[189608]: 2025-11-24 22:29:18.668 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:18.678 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:e8:3d 10.100.0.13'], port_security=['fa:16:3e:84:e8:3d 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '8b851edf-b3aa-4ca0-a142-8dd0d0e6270a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c09fe20f-09f5-4457-8c18-2dd55de423b7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5225a786b1b64fcbbd2af0a1b5082c92', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f2367eaf-5847-48eb-a9d7-e37430a35fff', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4e56029b-3de0-40b6-9ab5-3053975c41b2, chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], logical_port=e0390902-6ae3-485d-b497-f57a8cca001c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:29:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:18.681 106776 INFO neutron.agent.ovn.metadata.agent [-] Port e0390902-6ae3-485d-b497-f57a8cca001c in datapath c09fe20f-09f5-4457-8c18-2dd55de423b7 bound to our chassis
Nov 24 22:29:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:18.684 106776 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c09fe20f-09f5-4457-8c18-2dd55de423b7
Nov 24 22:29:18 compute-0 ovn_controller[97889]: 2025-11-24T22:29:18Z|00068|binding|INFO|Setting lport e0390902-6ae3-485d-b497-f57a8cca001c ovn-installed in OVS
Nov 24 22:29:18 compute-0 ovn_controller[97889]: 2025-11-24T22:29:18Z|00069|binding|INFO|Setting lport e0390902-6ae3-485d-b497-f57a8cca001c up in Southbound
Nov 24 22:29:18 compute-0 nova_compute[189608]: 2025-11-24 22:29:18.701 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:18 compute-0 nova_compute[189608]: 2025-11-24 22:29:18.703 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:18 compute-0 systemd-udevd[251396]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 22:29:18 compute-0 nova_compute[189608]: 2025-11-24 22:29:18.711 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:18.709 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[6c7b94b7-bdac-4e50-8383-99d2bd5747ec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:29:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:18.714 106776 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc09fe20f-01 in ovnmeta-c09fe20f-09f5-4457-8c18-2dd55de423b7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 24 22:29:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:18.716 240020 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc09fe20f-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 24 22:29:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:18.717 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[fddc8103-85a5-4465-94a0-7b7c6cb6a2f7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:29:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:18.718 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[9f78b66d-4746-47f8-a20d-05b86e23ac1f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:29:18 compute-0 systemd-machined[155884]: New machine qemu-6-instance-00000006.
Nov 24 22:29:18 compute-0 NetworkManager[56413]: <info>  [1764023358.7346] device (tape0390902-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 22:29:18 compute-0 NetworkManager[56413]: <info>  [1764023358.7356] device (tape0390902-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 24 22:29:18 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Nov 24 22:29:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:18.743 106891 DEBUG oslo.privsep.daemon [-] privsep: reply[49b0a96c-bd11-43c2-85e1-db2d33c58026]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:29:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:18.768 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[03982126-7c6c-4496-b0c8-27cd4c82159c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:29:18 compute-0 podman[251367]: 2025-11-24 22:29:18.775189152 +0000 UTC m=+0.125347135 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 22:29:18 compute-0 podman[251365]: 2025-11-24 22:29:18.794105561 +0000 UTC m=+0.147954339 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, release=1755695350, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, name=ubi9-minimal, io.buildah.version=1.33.7, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_id=edpm, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Nov 24 22:29:18 compute-0 podman[251369]: 2025-11-24 22:29:18.812254404 +0000 UTC m=+0.141678702 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, architecture=x86_64, name=ubi9, version=9.4, com.redhat.component=ubi9-container, config_id=edpm, io.openshift.tags=base rhel9, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, vcs-type=git, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1214.1726694543, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.)
Nov 24 22:29:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:18.822 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[e666cb59-4eae-4956-af78-b1a44d473574]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:29:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:18.830 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[028d9557-ff68-4e35-9a9e-0735cc00e808]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:29:18 compute-0 NetworkManager[56413]: <info>  [1764023358.8322] manager: (tapc09fe20f-00): new Veth device (/org/freedesktop/NetworkManager/Devices/35)
Nov 24 22:29:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:18.871 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[f21f3ff5-e863-468e-9581-c96b844fa42d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:29:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:18.875 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[70d6279e-f272-42e4-ab24-086f076402d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:29:18 compute-0 NetworkManager[56413]: <info>  [1764023358.9075] device (tapc09fe20f-00): carrier: link connected
Nov 24 22:29:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:18.913 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[f6391cf6-4d66-4e33-a277-827cf142a47a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:29:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:18.939 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[0008a2a4-b3e7-485a-a9cc-d51641797636]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc09fe20f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:2c:3a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 520639, 'reachable_time': 27482, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251459, 'error': None, 'target': 'ovnmeta-c09fe20f-09f5-4457-8c18-2dd55de423b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:29:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:18.958 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[c730b189-a230-47cf-afef-5947387a7bd7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee8:2c3a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 520639, 'tstamp': 520639}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251460, 'error': None, 'target': 'ovnmeta-c09fe20f-09f5-4457-8c18-2dd55de423b7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:29:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:18.978 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[0317e9d8-5909-459f-a288-817e6bf030cc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc09fe20f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:2c:3a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 520639, 'reachable_time': 27482, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251461, 'error': None, 'target': 'ovnmeta-c09fe20f-09f5-4457-8c18-2dd55de423b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:29:18 compute-0 nova_compute[189608]: 2025-11-24 22:29:18.976 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:19 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:19.014 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[2628cd6a-efe0-47ed-86a4-3d20b1defbe1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:29:19 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:19.086 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[75eb2fad-e50e-4c23-b851-863391cb809c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:29:19 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:19.088 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc09fe20f-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:29:19 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:19.089 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 22:29:19 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:19.089 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc09fe20f-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:29:19 compute-0 kernel: tapc09fe20f-00: entered promiscuous mode
Nov 24 22:29:19 compute-0 nova_compute[189608]: 2025-11-24 22:29:19.091 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:19 compute-0 NetworkManager[56413]: <info>  [1764023359.0937] manager: (tapc09fe20f-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Nov 24 22:29:19 compute-0 nova_compute[189608]: 2025-11-24 22:29:19.093 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:19 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:19.094 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc09fe20f-00, col_values=(('external_ids', {'iface-id': '467754a3-548e-4841-9628-d4a6a4daa2bb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:29:19 compute-0 ovn_controller[97889]: 2025-11-24T22:29:19Z|00070|binding|INFO|Releasing lport 467754a3-548e-4841-9628-d4a6a4daa2bb from this chassis (sb_readonly=0)
Nov 24 22:29:19 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:19.107 106776 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c09fe20f-09f5-4457-8c18-2dd55de423b7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c09fe20f-09f5-4457-8c18-2dd55de423b7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 24 22:29:19 compute-0 nova_compute[189608]: 2025-11-24 22:29:19.107 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:19 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:19.108 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[d2e28a4d-21c4-47d8-aef0-9649e7c7c3f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:29:19 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:19.109 106776 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 24 22:29:19 compute-0 ovn_metadata_agent[106771]: global
Nov 24 22:29:19 compute-0 ovn_metadata_agent[106771]:     log         /dev/log local0 debug
Nov 24 22:29:19 compute-0 ovn_metadata_agent[106771]:     log-tag     haproxy-metadata-proxy-c09fe20f-09f5-4457-8c18-2dd55de423b7
Nov 24 22:29:19 compute-0 ovn_metadata_agent[106771]:     user        root
Nov 24 22:29:19 compute-0 ovn_metadata_agent[106771]:     group       root
Nov 24 22:29:19 compute-0 ovn_metadata_agent[106771]:     maxconn     1024
Nov 24 22:29:19 compute-0 ovn_metadata_agent[106771]:     pidfile     /var/lib/neutron/external/pids/c09fe20f-09f5-4457-8c18-2dd55de423b7.pid.haproxy
Nov 24 22:29:19 compute-0 ovn_metadata_agent[106771]:     daemon
Nov 24 22:29:19 compute-0 ovn_metadata_agent[106771]: 
Nov 24 22:29:19 compute-0 ovn_metadata_agent[106771]: defaults
Nov 24 22:29:19 compute-0 ovn_metadata_agent[106771]:     log global
Nov 24 22:29:19 compute-0 ovn_metadata_agent[106771]:     mode http
Nov 24 22:29:19 compute-0 ovn_metadata_agent[106771]:     option httplog
Nov 24 22:29:19 compute-0 ovn_metadata_agent[106771]:     option dontlognull
Nov 24 22:29:19 compute-0 ovn_metadata_agent[106771]:     option http-server-close
Nov 24 22:29:19 compute-0 ovn_metadata_agent[106771]:     option forwardfor
Nov 24 22:29:19 compute-0 ovn_metadata_agent[106771]:     retries                 3
Nov 24 22:29:19 compute-0 ovn_metadata_agent[106771]:     timeout http-request    30s
Nov 24 22:29:19 compute-0 ovn_metadata_agent[106771]:     timeout connect         30s
Nov 24 22:29:19 compute-0 ovn_metadata_agent[106771]:     timeout client          32s
Nov 24 22:29:19 compute-0 ovn_metadata_agent[106771]:     timeout server          32s
Nov 24 22:29:19 compute-0 ovn_metadata_agent[106771]:     timeout http-keep-alive 30s
Nov 24 22:29:19 compute-0 ovn_metadata_agent[106771]: 
Nov 24 22:29:19 compute-0 ovn_metadata_agent[106771]: 
Nov 24 22:29:19 compute-0 ovn_metadata_agent[106771]: listen listener
Nov 24 22:29:19 compute-0 ovn_metadata_agent[106771]:     bind 169.254.169.254:80
Nov 24 22:29:19 compute-0 ovn_metadata_agent[106771]:     server metadata /var/lib/neutron/metadata_proxy
Nov 24 22:29:19 compute-0 ovn_metadata_agent[106771]:     http-request add-header X-OVN-Network-ID c09fe20f-09f5-4457-8c18-2dd55de423b7
Nov 24 22:29:19 compute-0 ovn_metadata_agent[106771]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 24 22:29:19 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:19.110 106776 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c09fe20f-09f5-4457-8c18-2dd55de423b7', 'env', 'PROCESS_TAG=haproxy-c09fe20f-09f5-4457-8c18-2dd55de423b7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c09fe20f-09f5-4457-8c18-2dd55de423b7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
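The rendered configuration above is written to /var/lib/neutron/ovn-metadata-proxy/<network-id>.conf and haproxy is then started inside the ovnmeta-<network-id> namespace via rootwrap, binding the link-local metadata address 169.254.169.254:80 for that one network. A minimal sketch of rendering a comparable per-network config with string.Template; render_metadata_proxy_conf is a hypothetical helper, not Neutron's own driver code, and the defaults section is omitted for brevity:

    from string import Template

    HAPROXY_TEMPLATE = Template("""\
    global
        log         /dev/log local0 debug
        log-tag     haproxy-metadata-proxy-$network_id
        user        root
        group       root
        maxconn     1024
        pidfile     $pid_dir/$network_id.pid.haproxy
        daemon

    listen listener
        bind 169.254.169.254:80
        server metadata $backend
        http-request add-header X-OVN-Network-ID $network_id
    """)

    def render_metadata_proxy_conf(network_id,
                                   pid_dir="/var/lib/neutron/external/pids",
                                   backend="/var/lib/neutron/metadata_proxy"):
        # hypothetical helper: returns config text like the block logged above
        return HAPROXY_TEMPLATE.substitute(network_id=network_id,
                                           pid_dir=pid_dir,
                                           backend=backend)

    print(render_metadata_proxy_conf("c09fe20f-09f5-4457-8c18-2dd55de423b7"))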
Nov 24 22:29:19 compute-0 nova_compute[189608]: 2025-11-24 22:29:19.157 189613 DEBUG nova.virt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Emitting event <LifecycleEvent: 1764023359.155758, 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:29:19 compute-0 nova_compute[189608]: 2025-11-24 22:29:19.158 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] VM Started (Lifecycle Event)
Nov 24 22:29:19 compute-0 nova_compute[189608]: 2025-11-24 22:29:19.181 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:29:19 compute-0 nova_compute[189608]: 2025-11-24 22:29:19.192 189613 DEBUG nova.virt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Emitting event <LifecycleEvent: 1764023359.1559496, 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:29:19 compute-0 nova_compute[189608]: 2025-11-24 22:29:19.193 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] VM Paused (Lifecycle Event)
Nov 24 22:29:19 compute-0 nova_compute[189608]: 2025-11-24 22:29:19.221 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:29:19 compute-0 nova_compute[189608]: 2025-11-24 22:29:19.228 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 22:29:19 compute-0 nova_compute[189608]: 2025-11-24 22:29:19.255 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] During sync_power_state the instance has a pending task (spawning). Skip.
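The pair of lines above shows the power-state sync short-circuit: the hypervisor reports the freshly created domain as paused (VM power_state 3) while the database still says 0, but because the instance still carries a task_state of 'spawning' the manager deliberately skips the sync rather than interfere with the in-flight boot. A simplified, hypothetical sketch of that guard, using the numeric values visible in the log:

    NOSTATE, PAUSED = 0, 3    # DB power_state and VM power_state as logged

    def maybe_sync_power_state(task_state, db_power_state, vm_power_state):
        # hypothetical stand-in for the manager's handle_lifecycle_event guard:
        # any pending task means another operation owns the instance, so skip
        if task_state is not None:
            return f"pending task ({task_state}), skip"
        if db_power_state != vm_power_state:
            return "sync required"
        return "in sync"

    print(maybe_sync_power_state("spawning", NOSTATE, PAUSED))   # pending task (spawning), skip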
Nov 24 22:29:19 compute-0 podman[251499]: 2025-11-24 22:29:19.58000054 +0000 UTC m=+0.091876625 container create faf204fcd1bece95f7146446c8f002aec55d033b96981e54e29e7ddcdfa4707d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c09fe20f-09f5-4457-8c18-2dd55de423b7, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 24 22:29:19 compute-0 podman[251499]: 2025-11-24 22:29:19.531781202 +0000 UTC m=+0.043657357 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 22:29:19 compute-0 systemd[1]: Started libpod-conmon-faf204fcd1bece95f7146446c8f002aec55d033b96981e54e29e7ddcdfa4707d.scope.
Nov 24 22:29:19 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 24 22:29:19 compute-0 systemd[1]: Started libcrun container.
Nov 24 22:29:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba1ed220580f7dfe34bd3547337bff8714309264e691441f2058eea1b0805f7d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 24 22:29:19 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 24 22:29:19 compute-0 podman[251499]: 2025-11-24 22:29:19.706320085 +0000 UTC m=+0.218196240 container init faf204fcd1bece95f7146446c8f002aec55d033b96981e54e29e7ddcdfa4707d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c09fe20f-09f5-4457-8c18-2dd55de423b7, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:29:19 compute-0 podman[251499]: 2025-11-24 22:29:19.719422872 +0000 UTC m=+0.231298987 container start faf204fcd1bece95f7146446c8f002aec55d033b96981e54e29e7ddcdfa4707d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c09fe20f-09f5-4457-8c18-2dd55de423b7, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:29:19 compute-0 neutron-haproxy-ovnmeta-c09fe20f-09f5-4457-8c18-2dd55de423b7[251515]: [NOTICE]   (251538) : New worker (251540) forked
Nov 24 22:29:19 compute-0 neutron-haproxy-ovnmeta-c09fe20f-09f5-4457-8c18-2dd55de423b7[251515]: [NOTICE]   (251538) : Loading success.
Nov 24 22:29:20 compute-0 nova_compute[189608]: 2025-11-24 22:29:20.017 189613 DEBUG nova.network.neutron [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Successfully updated port: 50a96df6-c0e9-444e-a10a-8d2747085ccf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 24 22:29:20 compute-0 nova_compute[189608]: 2025-11-24 22:29:20.041 189613 DEBUG oslo_concurrency.lockutils [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Acquiring lock "refresh_cache-54138d77-c8e9-474f-8c09-e7f7cb37cbaa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:29:20 compute-0 nova_compute[189608]: 2025-11-24 22:29:20.042 189613 DEBUG oslo_concurrency.lockutils [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Acquired lock "refresh_cache-54138d77-c8e9-474f-8c09-e7f7cb37cbaa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:29:20 compute-0 nova_compute[189608]: 2025-11-24 22:29:20.043 189613 DEBUG nova.network.neutron [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 22:29:20 compute-0 nova_compute[189608]: 2025-11-24 22:29:20.095 189613 DEBUG nova.network.neutron [req-024d75c3-ab63-49e1-b8cb-576b5de47c21 req-b5a74bb1-508d-4612-9d23-91c0619a2e88 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Updated VIF entry in instance network info cache for port e0390902-6ae3-485d-b497-f57a8cca001c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 22:29:20 compute-0 nova_compute[189608]: 2025-11-24 22:29:20.096 189613 DEBUG nova.network.neutron [req-024d75c3-ab63-49e1-b8cb-576b5de47c21 req-b5a74bb1-508d-4612-9d23-91c0619a2e88 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Updating instance_info_cache with network_info: [{"id": "e0390902-6ae3-485d-b497-f57a8cca001c", "address": "fa:16:3e:84:e8:3d", "network": {"id": "c09fe20f-09f5-4457-8c18-2dd55de423b7", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1769316562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5225a786b1b64fcbbd2af0a1b5082c92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0390902-6a", "ovs_interfaceid": "e0390902-6ae3-485d-b497-f57a8cca001c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:29:20 compute-0 nova_compute[189608]: 2025-11-24 22:29:20.123 189613 DEBUG oslo_concurrency.lockutils [req-024d75c3-ab63-49e1-b8cb-576b5de47c21 req-b5a74bb1-508d-4612-9d23-91c0619a2e88 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Releasing lock "refresh_cache-8b851edf-b3aa-4ca0-a142-8dd0d0e6270a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:29:20 compute-0 nova_compute[189608]: 2025-11-24 22:29:20.165 189613 DEBUG nova.compute.manager [req-cd79854e-f171-4cd0-a4ab-c83c989e72b4 req-f6f0fb3a-c07f-4c56-aa35-2bc4fd864bf3 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Received event network-changed-50a96df6-c0e9-444e-a10a-8d2747085ccf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:29:20 compute-0 nova_compute[189608]: 2025-11-24 22:29:20.166 189613 DEBUG nova.compute.manager [req-cd79854e-f171-4cd0-a4ab-c83c989e72b4 req-f6f0fb3a-c07f-4c56-aa35-2bc4fd864bf3 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Refreshing instance network info cache due to event network-changed-50a96df6-c0e9-444e-a10a-8d2747085ccf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 22:29:20 compute-0 nova_compute[189608]: 2025-11-24 22:29:20.167 189613 DEBUG oslo_concurrency.lockutils [req-cd79854e-f171-4cd0-a4ab-c83c989e72b4 req-f6f0fb3a-c07f-4c56-aa35-2bc4fd864bf3 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "refresh_cache-54138d77-c8e9-474f-8c09-e7f7cb37cbaa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:29:20 compute-0 nova_compute[189608]: 2025-11-24 22:29:20.312 189613 DEBUG nova.network.neutron [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.162 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.414 189613 DEBUG nova.network.neutron [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Updating instance_info_cache with network_info: [{"id": "50a96df6-c0e9-444e-a10a-8d2747085ccf", "address": "fa:16:3e:09:f0:5f", "network": {"id": "2115acc4-d8c6-496a-ad70-3b4d97101ed5", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-312324450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92c99dbee50c412a8684f6b46045b25e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50a96df6-c0", "ovs_interfaceid": "50a96df6-c0e9-444e-a10a-8d2747085ccf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.442 189613 DEBUG oslo_concurrency.lockutils [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Releasing lock "refresh_cache-54138d77-c8e9-474f-8c09-e7f7cb37cbaa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.444 189613 DEBUG nova.compute.manager [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Instance network_info: |[{"id": "50a96df6-c0e9-444e-a10a-8d2747085ccf", "address": "fa:16:3e:09:f0:5f", "network": {"id": "2115acc4-d8c6-496a-ad70-3b4d97101ed5", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-312324450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92c99dbee50c412a8684f6b46045b25e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50a96df6-c0", "ovs_interfaceid": "50a96df6-c0e9-444e-a10a-8d2747085ccf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.445 189613 DEBUG oslo_concurrency.lockutils [req-cd79854e-f171-4cd0-a4ab-c83c989e72b4 req-f6f0fb3a-c07f-4c56-aa35-2bc4fd864bf3 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquired lock "refresh_cache-54138d77-c8e9-474f-8c09-e7f7cb37cbaa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.446 189613 DEBUG nova.network.neutron [req-cd79854e-f171-4cd0-a4ab-c83c989e72b4 req-f6f0fb3a-c07f-4c56-aa35-2bc4fd864bf3 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Refreshing network info cache for port 50a96df6-c0e9-444e-a10a-8d2747085ccf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.451 189613 DEBUG nova.virt.libvirt.driver [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Start _get_guest_xml network_info=[{"id": "50a96df6-c0e9-444e-a10a-8d2747085ccf", "address": "fa:16:3e:09:f0:5f", "network": {"id": "2115acc4-d8c6-496a-ad70-3b4d97101ed5", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-312324450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92c99dbee50c412a8684f6b46045b25e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50a96df6-c0", "ovs_interfaceid": "50a96df6-c0e9-444e-a10a-8d2747085ccf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T22:28:16Z,direct_url=<?>,disk_format='qcow2',id=ec71d7d5-c197-4331-bf8d-e2de71a8419f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='309342b7e3e849b2a5dd56651d8fa068',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T22:28:18Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'size': 0, 'encryption_format': None, 'guest_format': None, 'image_id': 'ec71d7d5-c197-4331-bf8d-e2de71a8419f'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.474 189613 WARNING nova.virt.libvirt.driver [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.485 189613 DEBUG nova.virt.libvirt.host [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.486 189613 DEBUG nova.virt.libvirt.host [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.491 189613 DEBUG nova.virt.libvirt.host [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.492 189613 DEBUG nova.virt.libvirt.host [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.492 189613 DEBUG nova.virt.libvirt.driver [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.493 189613 DEBUG nova.virt.hardware [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-24T22:28:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a49f1e6c-1051-4dea-812e-0063121444a0',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T22:28:16Z,direct_url=<?>,disk_format='qcow2',id=ec71d7d5-c197-4331-bf8d-e2de71a8419f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='309342b7e3e849b2a5dd56651d8fa068',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T22:28:18Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.493 189613 DEBUG nova.virt.hardware [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.493 189613 DEBUG nova.virt.hardware [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.494 189613 DEBUG nova.virt.hardware [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.494 189613 DEBUG nova.virt.hardware [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.494 189613 DEBUG nova.virt.hardware [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.495 189613 DEBUG nova.virt.hardware [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.495 189613 DEBUG nova.virt.hardware [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.495 189613 DEBUG nova.virt.hardware [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.496 189613 DEBUG nova.virt.hardware [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.496 189613 DEBUG nova.virt.hardware [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.500 189613 DEBUG nova.virt.libvirt.vif [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T22:29:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1330136782',display_name='tempest-ServersTestManualDisk-server-1330136782',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1330136782',id=7,image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE2rR8EJVS2XFZhYBVxQE65O/XakZyGehGDzzcXRQvdzCvURgilOUpVaeNDlQDHNg2ecl0skHSbQrOHuYiOdGVrqUjR/pTiJGpcmn7MeRhN4XP1fh58EmZy6I49MehGoow==',key_name='tempest-keypair-962835102',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92c99dbee50c412a8684f6b46045b25e',ramdisk_id='',reservation_id='r-jk3lqaxp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-933824192',owner_user_name='tempest-ServersTestManualDisk-933824192-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T22:29:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='60bec8f905ce42c189140229b54eb832',uuid=54138d77-c8e9-474f-8c09-e7f7cb37cbaa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "50a96df6-c0e9-444e-a10a-8d2747085ccf", "address": "fa:16:3e:09:f0:5f", "network": {"id": "2115acc4-d8c6-496a-ad70-3b4d97101ed5", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-312324450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92c99dbee50c412a8684f6b46045b25e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50a96df6-c0", "ovs_interfaceid": "50a96df6-c0e9-444e-a10a-8d2747085ccf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.501 189613 DEBUG nova.network.os_vif_util [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Converting VIF {"id": "50a96df6-c0e9-444e-a10a-8d2747085ccf", "address": "fa:16:3e:09:f0:5f", "network": {"id": "2115acc4-d8c6-496a-ad70-3b4d97101ed5", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-312324450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92c99dbee50c412a8684f6b46045b25e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50a96df6-c0", "ovs_interfaceid": "50a96df6-c0e9-444e-a10a-8d2747085ccf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.502 189613 DEBUG nova.network.os_vif_util [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:f0:5f,bridge_name='br-int',has_traffic_filtering=True,id=50a96df6-c0e9-444e-a10a-8d2747085ccf,network=Network(2115acc4-d8c6-496a-ad70-3b4d97101ed5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50a96df6-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.503 189613 DEBUG nova.objects.instance [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Lazy-loading 'pci_devices' on Instance uuid 54138d77-c8e9-474f-8c09-e7f7cb37cbaa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.522 189613 DEBUG nova.virt.libvirt.driver [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] End _get_guest_xml xml=<domain type="kvm">
Nov 24 22:29:22 compute-0 nova_compute[189608]:   <uuid>54138d77-c8e9-474f-8c09-e7f7cb37cbaa</uuid>
Nov 24 22:29:22 compute-0 nova_compute[189608]:   <name>instance-00000007</name>
Nov 24 22:29:22 compute-0 nova_compute[189608]:   <memory>131072</memory>
Nov 24 22:29:22 compute-0 nova_compute[189608]:   <vcpu>1</vcpu>
Nov 24 22:29:22 compute-0 nova_compute[189608]:   <metadata>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 22:29:22 compute-0 nova_compute[189608]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:       <nova:name>tempest-ServersTestManualDisk-server-1330136782</nova:name>
Nov 24 22:29:22 compute-0 nova_compute[189608]:       <nova:creationTime>2025-11-24 22:29:22</nova:creationTime>
Nov 24 22:29:22 compute-0 nova_compute[189608]:       <nova:flavor name="m1.nano">
Nov 24 22:29:22 compute-0 nova_compute[189608]:         <nova:memory>128</nova:memory>
Nov 24 22:29:22 compute-0 nova_compute[189608]:         <nova:disk>1</nova:disk>
Nov 24 22:29:22 compute-0 nova_compute[189608]:         <nova:swap>0</nova:swap>
Nov 24 22:29:22 compute-0 nova_compute[189608]:         <nova:ephemeral>0</nova:ephemeral>
Nov 24 22:29:22 compute-0 nova_compute[189608]:         <nova:vcpus>1</nova:vcpus>
Nov 24 22:29:22 compute-0 nova_compute[189608]:       </nova:flavor>
Nov 24 22:29:22 compute-0 nova_compute[189608]:       <nova:owner>
Nov 24 22:29:22 compute-0 nova_compute[189608]:         <nova:user uuid="60bec8f905ce42c189140229b54eb832">tempest-ServersTestManualDisk-933824192-project-member</nova:user>
Nov 24 22:29:22 compute-0 nova_compute[189608]:         <nova:project uuid="92c99dbee50c412a8684f6b46045b25e">tempest-ServersTestManualDisk-933824192</nova:project>
Nov 24 22:29:22 compute-0 nova_compute[189608]:       </nova:owner>
Nov 24 22:29:22 compute-0 nova_compute[189608]:       <nova:root type="image" uuid="ec71d7d5-c197-4331-bf8d-e2de71a8419f"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:       <nova:ports>
Nov 24 22:29:22 compute-0 nova_compute[189608]:         <nova:port uuid="50a96df6-c0e9-444e-a10a-8d2747085ccf">
Nov 24 22:29:22 compute-0 nova_compute[189608]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:         </nova:port>
Nov 24 22:29:22 compute-0 nova_compute[189608]:       </nova:ports>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     </nova:instance>
Nov 24 22:29:22 compute-0 nova_compute[189608]:   </metadata>
Nov 24 22:29:22 compute-0 nova_compute[189608]:   <sysinfo type="smbios">
Nov 24 22:29:22 compute-0 nova_compute[189608]:     <system>
Nov 24 22:29:22 compute-0 nova_compute[189608]:       <entry name="manufacturer">RDO</entry>
Nov 24 22:29:22 compute-0 nova_compute[189608]:       <entry name="product">OpenStack Compute</entry>
Nov 24 22:29:22 compute-0 nova_compute[189608]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 22:29:22 compute-0 nova_compute[189608]:       <entry name="serial">54138d77-c8e9-474f-8c09-e7f7cb37cbaa</entry>
Nov 24 22:29:22 compute-0 nova_compute[189608]:       <entry name="uuid">54138d77-c8e9-474f-8c09-e7f7cb37cbaa</entry>
Nov 24 22:29:22 compute-0 nova_compute[189608]:       <entry name="family">Virtual Machine</entry>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     </system>
Nov 24 22:29:22 compute-0 nova_compute[189608]:   </sysinfo>
Nov 24 22:29:22 compute-0 nova_compute[189608]:   <os>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     <boot dev="hd"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     <smbios mode="sysinfo"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:   </os>
Nov 24 22:29:22 compute-0 nova_compute[189608]:   <features>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     <acpi/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     <apic/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     <vmcoreinfo/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:   </features>
Nov 24 22:29:22 compute-0 nova_compute[189608]:   <clock offset="utc">
Nov 24 22:29:22 compute-0 nova_compute[189608]:     <timer name="pit" tickpolicy="delay"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     <timer name="hpet" present="no"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:   </clock>
Nov 24 22:29:22 compute-0 nova_compute[189608]:   <cpu mode="host-model" match="exact">
Nov 24 22:29:22 compute-0 nova_compute[189608]:     <topology sockets="1" cores="1" threads="1"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:   </cpu>
Nov 24 22:29:22 compute-0 nova_compute[189608]:   <devices>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     <disk type="file" device="disk">
Nov 24 22:29:22 compute-0 nova_compute[189608]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:       <source file="/var/lib/nova/instances/54138d77-c8e9-474f-8c09-e7f7cb37cbaa/disk"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:       <target dev="vda" bus="virtio"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     </disk>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     <disk type="file" device="cdrom">
Nov 24 22:29:22 compute-0 nova_compute[189608]:       <driver name="qemu" type="raw" cache="none"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:       <source file="/var/lib/nova/instances/54138d77-c8e9-474f-8c09-e7f7cb37cbaa/disk.config"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:       <target dev="sda" bus="sata"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     </disk>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     <interface type="ethernet">
Nov 24 22:29:22 compute-0 nova_compute[189608]:       <mac address="fa:16:3e:09:f0:5f"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:       <model type="virtio"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:       <driver name="vhost" rx_queue_size="512"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:       <mtu size="1442"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:       <target dev="tap50a96df6-c0"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     </interface>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     <serial type="pty">
Nov 24 22:29:22 compute-0 nova_compute[189608]:       <log file="/var/lib/nova/instances/54138d77-c8e9-474f-8c09-e7f7cb37cbaa/console.log" append="off"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     </serial>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     <video>
Nov 24 22:29:22 compute-0 nova_compute[189608]:       <model type="virtio"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     </video>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     <input type="tablet" bus="usb"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     <rng model="virtio">
Nov 24 22:29:22 compute-0 nova_compute[189608]:       <backend model="random">/dev/urandom</backend>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     </rng>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     <controller type="usb" index="0"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     <memballoon model="virtio">
Nov 24 22:29:22 compute-0 nova_compute[189608]:       <stats period="10"/>
Nov 24 22:29:22 compute-0 nova_compute[189608]:     </memballoon>
Nov 24 22:29:22 compute-0 nova_compute[189608]:   </devices>
Nov 24 22:29:22 compute-0 nova_compute[189608]: </domain>
Nov 24 22:29:22 compute-0 nova_compute[189608]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.523 189613 DEBUG nova.compute.manager [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Preparing to wait for external event network-vif-plugged-50a96df6-c0e9-444e-a10a-8d2747085ccf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.523 189613 DEBUG oslo_concurrency.lockutils [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Acquiring lock "54138d77-c8e9-474f-8c09-e7f7cb37cbaa-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.524 189613 DEBUG oslo_concurrency.lockutils [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Lock "54138d77-c8e9-474f-8c09-e7f7cb37cbaa-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.524 189613 DEBUG oslo_concurrency.lockutils [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Lock "54138d77-c8e9-474f-8c09-e7f7cb37cbaa-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.525 189613 DEBUG nova.virt.libvirt.vif [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T22:29:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1330136782',display_name='tempest-ServersTestManualDisk-server-1330136782',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1330136782',id=7,image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE2rR8EJVS2XFZhYBVxQE65O/XakZyGehGDzzcXRQvdzCvURgilOUpVaeNDlQDHNg2ecl0skHSbQrOHuYiOdGVrqUjR/pTiJGpcmn7MeRhN4XP1fh58EmZy6I49MehGoow==',key_name='tempest-keypair-962835102',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92c99dbee50c412a8684f6b46045b25e',ramdisk_id='',reservation_id='r-jk3lqaxp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-933824192',owner_user_name='tempest-ServersTestManualDisk-933824192-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T22:29:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='60bec8f905ce42c189140229b54eb832',uuid=54138d77-c8e9-474f-8c09-e7f7cb37cbaa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "50a96df6-c0e9-444e-a10a-8d2747085ccf", "address": "fa:16:3e:09:f0:5f", "network": {"id": "2115acc4-d8c6-496a-ad70-3b4d97101ed5", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-312324450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92c99dbee50c412a8684f6b46045b25e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50a96df6-c0", "ovs_interfaceid": "50a96df6-c0e9-444e-a10a-8d2747085ccf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.525 189613 DEBUG nova.network.os_vif_util [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Converting VIF {"id": "50a96df6-c0e9-444e-a10a-8d2747085ccf", "address": "fa:16:3e:09:f0:5f", "network": {"id": "2115acc4-d8c6-496a-ad70-3b4d97101ed5", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-312324450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92c99dbee50c412a8684f6b46045b25e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50a96df6-c0", "ovs_interfaceid": "50a96df6-c0e9-444e-a10a-8d2747085ccf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.526 189613 DEBUG nova.network.os_vif_util [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:f0:5f,bridge_name='br-int',has_traffic_filtering=True,id=50a96df6-c0e9-444e-a10a-8d2747085ccf,network=Network(2115acc4-d8c6-496a-ad70-3b4d97101ed5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50a96df6-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.526 189613 DEBUG os_vif [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:f0:5f,bridge_name='br-int',has_traffic_filtering=True,id=50a96df6-c0e9-444e-a10a-8d2747085ccf,network=Network(2115acc4-d8c6-496a-ad70-3b4d97101ed5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50a96df6-c0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.527 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.528 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.528 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.533 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.534 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap50a96df6-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.534 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap50a96df6-c0, col_values=(('external_ids', {'iface-id': '50a96df6-c0e9-444e-a10a-8d2747085ccf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:09:f0:5f', 'vm-uuid': '54138d77-c8e9-474f-8c09-e7f7cb37cbaa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.536 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:22 compute-0 NetworkManager[56413]: <info>  [1764023362.5378] manager: (tap50a96df6-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.538 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.553 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.554 189613 INFO os_vif [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:f0:5f,bridge_name='br-int',has_traffic_filtering=True,id=50a96df6-c0e9-444e-a10a-8d2747085ccf,network=Network(2115acc4-d8c6-496a-ad70-3b4d97101ed5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50a96df6-c0')
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.649 189613 DEBUG nova.virt.libvirt.driver [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.650 189613 DEBUG nova.virt.libvirt.driver [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.650 189613 DEBUG nova.virt.libvirt.driver [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] No VIF found with MAC fa:16:3e:09:f0:5f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.652 189613 INFO nova.virt.libvirt.driver [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Using config drive
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.781 189613 DEBUG nova.compute.manager [req-d8a8c8f5-681a-4312-a245-7667f9a8c310 req-a49f4858-8ce2-40d2-b12f-db95d1a4acbb c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Received event network-vif-plugged-e0390902-6ae3-485d-b497-f57a8cca001c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.781 189613 DEBUG oslo_concurrency.lockutils [req-d8a8c8f5-681a-4312-a245-7667f9a8c310 req-a49f4858-8ce2-40d2-b12f-db95d1a4acbb c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "8b851edf-b3aa-4ca0-a142-8dd0d0e6270a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.782 189613 DEBUG oslo_concurrency.lockutils [req-d8a8c8f5-681a-4312-a245-7667f9a8c310 req-a49f4858-8ce2-40d2-b12f-db95d1a4acbb c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "8b851edf-b3aa-4ca0-a142-8dd0d0e6270a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.783 189613 DEBUG oslo_concurrency.lockutils [req-d8a8c8f5-681a-4312-a245-7667f9a8c310 req-a49f4858-8ce2-40d2-b12f-db95d1a4acbb c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "8b851edf-b3aa-4ca0-a142-8dd0d0e6270a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.783 189613 DEBUG nova.compute.manager [req-d8a8c8f5-681a-4312-a245-7667f9a8c310 req-a49f4858-8ce2-40d2-b12f-db95d1a4acbb c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Processing event network-vif-plugged-e0390902-6ae3-485d-b497-f57a8cca001c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.784 189613 DEBUG nova.compute.manager [req-d8a8c8f5-681a-4312-a245-7667f9a8c310 req-a49f4858-8ce2-40d2-b12f-db95d1a4acbb c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Received event network-vif-plugged-e0390902-6ae3-485d-b497-f57a8cca001c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.784 189613 DEBUG oslo_concurrency.lockutils [req-d8a8c8f5-681a-4312-a245-7667f9a8c310 req-a49f4858-8ce2-40d2-b12f-db95d1a4acbb c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "8b851edf-b3aa-4ca0-a142-8dd0d0e6270a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.785 189613 DEBUG oslo_concurrency.lockutils [req-d8a8c8f5-681a-4312-a245-7667f9a8c310 req-a49f4858-8ce2-40d2-b12f-db95d1a4acbb c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "8b851edf-b3aa-4ca0-a142-8dd0d0e6270a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.785 189613 DEBUG oslo_concurrency.lockutils [req-d8a8c8f5-681a-4312-a245-7667f9a8c310 req-a49f4858-8ce2-40d2-b12f-db95d1a4acbb c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "8b851edf-b3aa-4ca0-a142-8dd0d0e6270a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.785 189613 DEBUG nova.compute.manager [req-d8a8c8f5-681a-4312-a245-7667f9a8c310 req-a49f4858-8ce2-40d2-b12f-db95d1a4acbb c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] No waiting events found dispatching network-vif-plugged-e0390902-6ae3-485d-b497-f57a8cca001c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.786 189613 WARNING nova.compute.manager [req-d8a8c8f5-681a-4312-a245-7667f9a8c310 req-a49f4858-8ce2-40d2-b12f-db95d1a4acbb c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Received unexpected event network-vif-plugged-e0390902-6ae3-485d-b497-f57a8cca001c for instance with vm_state building and task_state spawning.
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.787 189613 DEBUG nova.compute.manager [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
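The entries above show the same network-vif-plugged event for port e0390902-6ae3-485d-b497-f57a8cca001c being delivered twice by Neutron: the first delivery is consumed by the spawn path's event wait (completed after 3 seconds), while the second finds no waiter and produces the "Received unexpected event ... vm_state building and task_state spawning" warning, which is typically harmless during a normal boot. A rough way to review the same build from the API side, sketched here only, is below; it assumes admin credentials and the openstack CLI, neither of which appears in this log:

# Instance-action records Nova stored for this server
openstack server event list 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a

# Current status once the build settles
openstack server show 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a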
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.795 189613 DEBUG nova.virt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Emitting event <LifecycleEvent: 1764023362.794164, 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.795 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] VM Resumed (Lifecycle Event)
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.800 189613 DEBUG nova.virt.libvirt.driver [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.811 189613 INFO nova.virt.libvirt.driver [-] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Instance spawned successfully.
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.811 189613 DEBUG nova.virt.libvirt.driver [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.828 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.843 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.846 189613 DEBUG nova.virt.libvirt.driver [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.847 189613 DEBUG nova.virt.libvirt.driver [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.848 189613 DEBUG nova.virt.libvirt.driver [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.848 189613 DEBUG nova.virt.libvirt.driver [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.848 189613 DEBUG nova.virt.libvirt.driver [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.849 189613 DEBUG nova.virt.libvirt.driver [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.890 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.932 189613 INFO nova.compute.manager [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Took 10.74 seconds to spawn the instance on the hypervisor.
Nov 24 22:29:22 compute-0 nova_compute[189608]: 2025-11-24 22:29:22.933 189613 DEBUG nova.compute.manager [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:29:23 compute-0 nova_compute[189608]: 2025-11-24 22:29:23.021 189613 INFO nova.compute.manager [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Took 11.21 seconds to build instance.
Nov 24 22:29:23 compute-0 nova_compute[189608]: 2025-11-24 22:29:23.041 189613 DEBUG oslo_concurrency.lockutils [None req-831b2c24-65a0-4b5b-bdf0-aa9fa3cc5b07 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Lock "8b851edf-b3aa-4ca0-a142-8dd0d0e6270a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.330s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:29:23 compute-0 nova_compute[189608]: 2025-11-24 22:29:23.169 189613 INFO nova.virt.libvirt.driver [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Creating config drive at /var/lib/nova/instances/54138d77-c8e9-474f-8c09-e7f7cb37cbaa/disk.config
Nov 24 22:29:23 compute-0 nova_compute[189608]: 2025-11-24 22:29:23.178 189613 DEBUG oslo_concurrency.processutils [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/54138d77-c8e9-474f-8c09-e7f7cb37cbaa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8axaga_5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:29:23 compute-0 nova_compute[189608]: 2025-11-24 22:29:23.316 189613 DEBUG oslo_concurrency.processutils [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/54138d77-c8e9-474f-8c09-e7f7cb37cbaa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8axaga_5" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
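The two processutils entries above record the config drive for instance 54138d77-c8e9-474f-8c09-e7f7cb37cbaa being built with mkisofs, with the command returning 0 in 0.138 seconds. If the generated image ever needs to be checked by hand, a minimal sketch (assuming the isoinfo tool from genisoimage is present on the host, which this log does not show) is:

# List the Rock Ridge directory tree inside the generated config drive
isoinfo -i /var/lib/nova/instances/54138d77-c8e9-474f-8c09-e7f7cb37cbaa/disk.config -l -R

# Or mount it read-only and read the metadata Nova wrote into it
mkdir -p /tmp/cfgdrive
mount -o loop,ro /var/lib/nova/instances/54138d77-c8e9-474f-8c09-e7f7cb37cbaa/disk.config /tmp/cfgdrive
cat /tmp/cfgdrive/openstack/latest/meta_data.json
umount /tmp/cfgdrive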
Nov 24 22:29:23 compute-0 kernel: tap50a96df6-c0: entered promiscuous mode
Nov 24 22:29:23 compute-0 NetworkManager[56413]: <info>  [1764023363.4475] manager: (tap50a96df6-c0): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Nov 24 22:29:23 compute-0 ovn_controller[97889]: 2025-11-24T22:29:23Z|00071|binding|INFO|Claiming lport 50a96df6-c0e9-444e-a10a-8d2747085ccf for this chassis.
Nov 24 22:29:23 compute-0 nova_compute[189608]: 2025-11-24 22:29:23.455 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:23 compute-0 ovn_controller[97889]: 2025-11-24T22:29:23Z|00072|binding|INFO|50a96df6-c0e9-444e-a10a-8d2747085ccf: Claiming fa:16:3e:09:f0:5f 10.100.0.4
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:23.463 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:f0:5f 10.100.0.4'], port_security=['fa:16:3e:09:f0:5f 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '54138d77-c8e9-474f-8c09-e7f7cb37cbaa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2115acc4-d8c6-496a-ad70-3b4d97101ed5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92c99dbee50c412a8684f6b46045b25e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a161dd63-5db1-4976-8ffe-fc92903e38a5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c619122e-78c3-4cd7-8428-c42da5043e73, chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], logical_port=50a96df6-c0e9-444e-a10a-8d2747085ccf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:23.466 106776 INFO neutron.agent.ovn.metadata.agent [-] Port 50a96df6-c0e9-444e-a10a-8d2747085ccf in datapath 2115acc4-d8c6-496a-ad70-3b4d97101ed5 bound to our chassis
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:23.469 106776 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2115acc4-d8c6-496a-ad70-3b4d97101ed5
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:23.487 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[5bc3536b-c2b4-431e-be71-0838a1141f16]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:23.488 106776 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2115acc4-d1 in ovnmeta-2115acc4-d8c6-496a-ad70-3b4d97101ed5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:23.491 240020 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2115acc4-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:23.491 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[05e9f2ad-7c11-458e-a04c-f0c648c65eb8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:29:23 compute-0 ovn_controller[97889]: 2025-11-24T22:29:23Z|00073|binding|INFO|Setting lport 50a96df6-c0e9-444e-a10a-8d2747085ccf ovn-installed in OVS
Nov 24 22:29:23 compute-0 ovn_controller[97889]: 2025-11-24T22:29:23Z|00074|binding|INFO|Setting lport 50a96df6-c0e9-444e-a10a-8d2747085ccf up in Southbound
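The ovn_controller entries above walk through the usual OVN binding sequence for logical port 50a96df6-c0e9-444e-a10a-8d2747085ccf: claim the lport for this chassis, program its MAC/IP (fa:16:3e:09:f0:5f 10.100.0.4), mark it ovn-installed in OVS, and finally set it up in the Southbound database. A few read-only checks that expose the same state, assuming ovn-sbctl and ovs-vsctl can be run on this chassis (for example from inside the ovn_controller container), are sketched here:

# Southbound view: which chassis claims the port, and is it up?
ovn-sbctl find Port_Binding logical_port=50a96df6-c0e9-444e-a10a-8d2747085ccf

# OVS view: confirm the tap interface carries the matching iface-id
ovs-vsctl get Interface tap50a96df6-c0 external_ids

# Chassis-wide summary of bindings
ovn-sbctl show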
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:23.493 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[ebed16d9-e1d6-4535-bab8-9616fb2bfd75]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:29:23 compute-0 nova_compute[189608]: 2025-11-24 22:29:23.494 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:23 compute-0 nova_compute[189608]: 2025-11-24 22:29:23.499 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:23 compute-0 systemd-udevd[251579]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:23.506 106891 DEBUG oslo.privsep.daemon [-] privsep: reply[8fee04dd-7515-41f4-8774-784f50d8083c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:29:23 compute-0 NetworkManager[56413]: <info>  [1764023363.5229] device (tap50a96df6-c0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 22:29:23 compute-0 NetworkManager[56413]: <info>  [1764023363.5270] device (tap50a96df6-c0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 24 22:29:23 compute-0 systemd-machined[155884]: New machine qemu-7-instance-00000007.
Nov 24 22:29:23 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:23.542 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[cdbfb1b3-d5cb-4d25-9a14-2f15eb2e53bb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:29:23 compute-0 podman[251562]: 2025-11-24 22:29:23.582244898 +0000 UTC m=+0.155016697 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:23.587 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[d89cec4f-bc14-4708-b11c-fee6e693fc50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:29:23 compute-0 NetworkManager[56413]: <info>  [1764023363.6066] manager: (tap2115acc4-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/39)
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:23.604 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[12c1371c-55e2-4efc-8d07-557c088d0b62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:23.651 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[08b6937d-f908-4310-9841-b890f1aad782]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:23.655 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[6792aad6-826c-4be9-aa80-54fb4a2e3d47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:29:23 compute-0 NetworkManager[56413]: <info>  [1764023363.6858] device (tap2115acc4-d0): carrier: link connected
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:23.693 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[14fab4dd-6c11-44b7-a9c6-83ec81b7c963]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:23.719 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[05d2fb4b-6389-45a6-aed1-a3f5b3e606dc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2115acc4-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:89:ea:f5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 521117, 'reachable_time': 30893, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251627, 'error': None, 'target': 'ovnmeta-2115acc4-d8c6-496a-ad70-3b4d97101ed5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:23.745 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[88db12d7-b05d-4358-b002-8ba3a92fe7ac]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe89:eaf5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 521117, 'tstamp': 521117}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251629, 'error': None, 'target': 'ovnmeta-2115acc4-d8c6-496a-ad70-3b4d97101ed5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:23.769 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[fd471333-fc8e-4340-90e0-f2104593a0a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2115acc4-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:89:ea:f5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 521117, 'reachable_time': 30893, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251630, 'error': None, 'target': 'ovnmeta-2115acc4-d8c6-496a-ad70-3b4d97101ed5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:23.817 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[d797ba30-14f7-4429-b32d-0a68fc6c87ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:23.897 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[b9d37dd7-178d-4bcd-b39d-eed3cd4c154b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:23.907 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2115acc4-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:23.908 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:23.908 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2115acc4-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:29:23 compute-0 kernel: tap2115acc4-d0: entered promiscuous mode
Nov 24 22:29:23 compute-0 nova_compute[189608]: 2025-11-24 22:29:23.911 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:23 compute-0 NetworkManager[56413]: <info>  [1764023363.9140] manager: (tap2115acc4-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:23.916 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2115acc4-d0, col_values=(('external_ids', {'iface-id': 'c7096241-2beb-47ef-98b0-9df11d4700b3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:29:23 compute-0 nova_compute[189608]: 2025-11-24 22:29:23.917 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:23 compute-0 ovn_controller[97889]: 2025-11-24T22:29:23Z|00075|binding|INFO|Releasing lport c7096241-2beb-47ef-98b0-9df11d4700b3 from this chassis (sb_readonly=0)
Nov 24 22:29:23 compute-0 nova_compute[189608]: 2025-11-24 22:29:23.921 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:23.921 106776 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2115acc4-d8c6-496a-ad70-3b4d97101ed5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2115acc4-d8c6-496a-ad70-3b4d97101ed5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:23.922 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[0d8d43c0-9cf5-4195-89a9-e06a675710c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:23.923 106776 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]: global
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]:     log         /dev/log local0 debug
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]:     log-tag     haproxy-metadata-proxy-2115acc4-d8c6-496a-ad70-3b4d97101ed5
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]:     user        root
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]:     group       root
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]:     maxconn     1024
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]:     pidfile     /var/lib/neutron/external/pids/2115acc4-d8c6-496a-ad70-3b4d97101ed5.pid.haproxy
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]:     daemon
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]: 
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]: defaults
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]:     log global
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]:     mode http
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]:     option httplog
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]:     option dontlognull
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]:     option http-server-close
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]:     option forwardfor
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]:     retries                 3
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]:     timeout http-request    30s
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]:     timeout connect         30s
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]:     timeout client          32s
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]:     timeout server          32s
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]:     timeout http-keep-alive 30s
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]: 
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]: 
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]: listen listener
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]:     bind 169.254.169.254:80
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]:     server metadata /var/lib/neutron/metadata_proxy
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]:     http-request add-header X-OVN-Network-ID 2115acc4-d8c6-496a-ad70-3b4d97101ed5
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 24 22:29:23 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:23.924 106776 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2115acc4-d8c6-496a-ad70-3b4d97101ed5', 'env', 'PROCESS_TAG=haproxy-2115acc4-d8c6-496a-ad70-3b4d97101ed5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2115acc4-d8c6-496a-ad70-3b4d97101ed5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
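The configuration dumped above is the per-network haproxy the OVN metadata agent renders for datapath 2115acc4-d8c6-496a-ad70-3b4d97101ed5: it binds 169.254.169.254:80 inside the ovnmeta-2115acc4-d8c6-496a-ad70-3b4d97101ed5 namespace, tags each request with an X-OVN-Network-ID header, and forwards it to the metadata agent's socket at /var/lib/neutron/metadata_proxy; the rootwrap command on the last line then starts haproxy inside that namespace. A quick sanity check from the host, using only names and paths taken from this log (availability of the ip and ss tools on the host is assumed), could be:

# The namespace the agent provisioned for this network
ip netns list | grep ovnmeta-2115acc4-d8c6-496a-ad70-3b4d97101ed5

# haproxy PID recorded under the path named in the config's pidfile line
cat /var/lib/neutron/external/pids/2115acc4-d8c6-496a-ad70-3b4d97101ed5.pid.haproxy

# Confirm the listener on the metadata IP inside the namespace
ip netns exec ovnmeta-2115acc4-d8c6-496a-ad70-3b4d97101ed5 ss -ltnp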
Nov 24 22:29:23 compute-0 nova_compute[189608]: 2025-11-24 22:29:23.932 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:23 compute-0 nova_compute[189608]: 2025-11-24 22:29:23.979 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:24 compute-0 nova_compute[189608]: 2025-11-24 22:29:24.326 189613 DEBUG nova.virt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Emitting event <LifecycleEvent: 1764023364.323613, 54138d77-c8e9-474f-8c09-e7f7cb37cbaa => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:29:24 compute-0 nova_compute[189608]: 2025-11-24 22:29:24.329 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] VM Started (Lifecycle Event)
Nov 24 22:29:24 compute-0 nova_compute[189608]: 2025-11-24 22:29:24.360 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:29:24 compute-0 nova_compute[189608]: 2025-11-24 22:29:24.366 189613 DEBUG nova.virt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Emitting event <LifecycleEvent: 1764023364.3239255, 54138d77-c8e9-474f-8c09-e7f7cb37cbaa => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:29:24 compute-0 nova_compute[189608]: 2025-11-24 22:29:24.366 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] VM Paused (Lifecycle Event)
Nov 24 22:29:24 compute-0 nova_compute[189608]: 2025-11-24 22:29:24.389 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:29:24 compute-0 nova_compute[189608]: 2025-11-24 22:29:24.394 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 22:29:24 compute-0 podman[251669]: 2025-11-24 22:29:24.400477163 +0000 UTC m=+0.078492710 container create 4449f7a81ad551fb77c6e1e5d384f2a0cc71a1bf928912478f8f1b6ba273e26b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2115acc4-d8c6-496a-ad70-3b4d97101ed5, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 22:29:24 compute-0 nova_compute[189608]: 2025-11-24 22:29:24.422 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] During sync_power_state the instance has a pending task (spawning). Skip.
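The sync_power_state entries compare Nova's stored power state with what libvirt reports, using Nova's numeric power_state codes: the database still holds 0 (NOSTATE) while the guest is building, the Paused lifecycle event corresponds to VM power_state 3 (PAUSED), and the Resumed events report 1 (RUNNING), so the handler skips the sync while task_state is still spawning. The constants can be printed directly, assuming the nova package is importable where the command runs (it may only exist inside the nova_compute container rather than on the host):

# Nova's power-state codes: 0=NOSTATE, 1=RUNNING, 3=PAUSED, 4=SHUTDOWN
python3 -c 'from nova.compute import power_state as p; print(p.NOSTATE, p.RUNNING, p.PAUSED, p.SHUTDOWN)'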
Nov 24 22:29:24 compute-0 podman[251669]: 2025-11-24 22:29:24.361786891 +0000 UTC m=+0.039802488 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 22:29:24 compute-0 systemd[1]: Started libpod-conmon-4449f7a81ad551fb77c6e1e5d384f2a0cc71a1bf928912478f8f1b6ba273e26b.scope.
Nov 24 22:29:24 compute-0 systemd[1]: Started libcrun container.
Nov 24 22:29:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2559dda6481eb9c9011da8fcffec0f2ed55911c14f122559d00c29e8bef26418/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 24 22:29:24 compute-0 podman[251669]: 2025-11-24 22:29:24.527879671 +0000 UTC m=+0.205895258 container init 4449f7a81ad551fb77c6e1e5d384f2a0cc71a1bf928912478f8f1b6ba273e26b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2115acc4-d8c6-496a-ad70-3b4d97101ed5, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 24 22:29:24 compute-0 podman[251669]: 2025-11-24 22:29:24.535663904 +0000 UTC m=+0.213679461 container start 4449f7a81ad551fb77c6e1e5d384f2a0cc71a1bf928912478f8f1b6ba273e26b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2115acc4-d8c6-496a-ad70-3b4d97101ed5, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 24 22:29:24 compute-0 neutron-haproxy-ovnmeta-2115acc4-d8c6-496a-ad70-3b4d97101ed5[251683]: [NOTICE]   (251687) : New worker (251689) forked
Nov 24 22:29:24 compute-0 neutron-haproxy-ovnmeta-2115acc4-d8c6-496a-ad70-3b4d97101ed5[251683]: [NOTICE]   (251687) : Loading success.
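In this deployment the metadata-proxy haproxy itself runs as a podman container (neutron-haproxy-ovnmeta-2115acc4-d8c6-496a-ad70-3b4d97101ed5, created and started in the podman entries above); the two NOTICE lines are haproxy confirming a worker was forked and the configuration loaded. A short container-side check, assuming podman is invoked as the same root user that owns these containers, might be:

# Is the per-network metadata proxy container up?
podman ps --filter name=neutron-haproxy-ovnmeta-2115acc4

# Recent haproxy output (the NOTICE lines above should also appear here)
podman logs --tail 20 neutron-haproxy-ovnmeta-2115acc4-d8c6-496a-ad70-3b4d97101ed5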
Nov 24 22:29:24 compute-0 nova_compute[189608]: 2025-11-24 22:29:24.789 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:29:24 compute-0 nova_compute[189608]: 2025-11-24 22:29:24.795 189613 DEBUG nova.compute.manager [req-d5dfb5d7-b7dc-4d41-84a2-d55390582eb8 req-d3b4b019-7a2a-4a57-a880-19cf061aefda c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Received event network-vif-plugged-50a96df6-c0e9-444e-a10a-8d2747085ccf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:29:24 compute-0 nova_compute[189608]: 2025-11-24 22:29:24.795 189613 DEBUG oslo_concurrency.lockutils [req-d5dfb5d7-b7dc-4d41-84a2-d55390582eb8 req-d3b4b019-7a2a-4a57-a880-19cf061aefda c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "54138d77-c8e9-474f-8c09-e7f7cb37cbaa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:29:24 compute-0 nova_compute[189608]: 2025-11-24 22:29:24.796 189613 DEBUG oslo_concurrency.lockutils [req-d5dfb5d7-b7dc-4d41-84a2-d55390582eb8 req-d3b4b019-7a2a-4a57-a880-19cf061aefda c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "54138d77-c8e9-474f-8c09-e7f7cb37cbaa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:29:24 compute-0 nova_compute[189608]: 2025-11-24 22:29:24.796 189613 DEBUG oslo_concurrency.lockutils [req-d5dfb5d7-b7dc-4d41-84a2-d55390582eb8 req-d3b4b019-7a2a-4a57-a880-19cf061aefda c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "54138d77-c8e9-474f-8c09-e7f7cb37cbaa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:29:24 compute-0 nova_compute[189608]: 2025-11-24 22:29:24.797 189613 DEBUG nova.compute.manager [req-d5dfb5d7-b7dc-4d41-84a2-d55390582eb8 req-d3b4b019-7a2a-4a57-a880-19cf061aefda c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Processing event network-vif-plugged-50a96df6-c0e9-444e-a10a-8d2747085ccf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 24 22:29:24 compute-0 nova_compute[189608]: 2025-11-24 22:29:24.798 189613 DEBUG nova.compute.manager [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 24 22:29:24 compute-0 nova_compute[189608]: 2025-11-24 22:29:24.805 189613 DEBUG nova.virt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Emitting event <LifecycleEvent: 1764023364.8052099, 54138d77-c8e9-474f-8c09-e7f7cb37cbaa => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:29:24 compute-0 nova_compute[189608]: 2025-11-24 22:29:24.806 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] VM Resumed (Lifecycle Event)
Nov 24 22:29:24 compute-0 nova_compute[189608]: 2025-11-24 22:29:24.809 189613 DEBUG nova.virt.libvirt.driver [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 24 22:29:24 compute-0 nova_compute[189608]: 2025-11-24 22:29:24.811 189613 DEBUG nova.network.neutron [req-cd79854e-f171-4cd0-a4ab-c83c989e72b4 req-f6f0fb3a-c07f-4c56-aa35-2bc4fd864bf3 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Updated VIF entry in instance network info cache for port 50a96df6-c0e9-444e-a10a-8d2747085ccf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 22:29:24 compute-0 nova_compute[189608]: 2025-11-24 22:29:24.812 189613 DEBUG nova.network.neutron [req-cd79854e-f171-4cd0-a4ab-c83c989e72b4 req-f6f0fb3a-c07f-4c56-aa35-2bc4fd864bf3 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Updating instance_info_cache with network_info: [{"id": "50a96df6-c0e9-444e-a10a-8d2747085ccf", "address": "fa:16:3e:09:f0:5f", "network": {"id": "2115acc4-d8c6-496a-ad70-3b4d97101ed5", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-312324450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92c99dbee50c412a8684f6b46045b25e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50a96df6-c0", "ovs_interfaceid": "50a96df6-c0e9-444e-a10a-8d2747085ccf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:29:24 compute-0 nova_compute[189608]: 2025-11-24 22:29:24.823 189613 INFO nova.virt.libvirt.driver [-] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Instance spawned successfully.
Nov 24 22:29:24 compute-0 nova_compute[189608]: 2025-11-24 22:29:24.824 189613 DEBUG nova.virt.libvirt.driver [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 24 22:29:24 compute-0 nova_compute[189608]: 2025-11-24 22:29:24.840 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:29:24 compute-0 nova_compute[189608]: 2025-11-24 22:29:24.845 189613 DEBUG oslo_concurrency.lockutils [req-cd79854e-f171-4cd0-a4ab-c83c989e72b4 req-f6f0fb3a-c07f-4c56-aa35-2bc4fd864bf3 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Releasing lock "refresh_cache-54138d77-c8e9-474f-8c09-e7f7cb37cbaa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:29:24 compute-0 nova_compute[189608]: 2025-11-24 22:29:24.851 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 22:29:24 compute-0 nova_compute[189608]: 2025-11-24 22:29:24.856 189613 DEBUG nova.virt.libvirt.driver [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:29:24 compute-0 nova_compute[189608]: 2025-11-24 22:29:24.856 189613 DEBUG nova.virt.libvirt.driver [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:29:24 compute-0 nova_compute[189608]: 2025-11-24 22:29:24.857 189613 DEBUG nova.virt.libvirt.driver [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:29:24 compute-0 nova_compute[189608]: 2025-11-24 22:29:24.858 189613 DEBUG nova.virt.libvirt.driver [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:29:24 compute-0 nova_compute[189608]: 2025-11-24 22:29:24.858 189613 DEBUG nova.virt.libvirt.driver [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:29:24 compute-0 nova_compute[189608]: 2025-11-24 22:29:24.859 189613 DEBUG nova.virt.libvirt.driver [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:29:24 compute-0 nova_compute[189608]: 2025-11-24 22:29:24.901 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 22:29:24 compute-0 nova_compute[189608]: 2025-11-24 22:29:24.936 189613 INFO nova.compute.manager [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Took 8.76 seconds to spawn the instance on the hypervisor.
Nov 24 22:29:24 compute-0 nova_compute[189608]: 2025-11-24 22:29:24.937 189613 DEBUG nova.compute.manager [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:29:25 compute-0 nova_compute[189608]: 2025-11-24 22:29:25.014 189613 INFO nova.compute.manager [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Took 9.58 seconds to build instance.
Nov 24 22:29:25 compute-0 nova_compute[189608]: 2025-11-24 22:29:25.032 189613 DEBUG oslo_concurrency.lockutils [None req-c8d498da-a440-46cb-8a16-a13533fb9f6f 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Lock "54138d77-c8e9-474f-8c09-e7f7cb37cbaa" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
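At this point both test instances have finished building on compute-0: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a in 11.21 seconds and 54138d77-c8e9-474f-8c09-e7f7cb37cbaa in 9.58 seconds, and both build locks have been released. A final cross-check of placement, sketched under the assumption of admin credentials for the openstack CLI and access to the libvirt daemon (which in this containerized deployment may only be reachable from inside the libvirt container), could be:

# Nova's view of everything scheduled to this hypervisor
openstack server list --all-projects --host compute-0 --long

# Libvirt's view of the running domains (instance-00000007 was registered with systemd-machined above)
virsh list --all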
Nov 24 22:29:26 compute-0 podman[251699]: 2025-11-24 22:29:26.592803743 +0000 UTC m=+0.122315071 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:29:26 compute-0 podman[251698]: 2025-11-24 22:29:26.638015748 +0000 UTC m=+0.159988122 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:29:26 compute-0 ovn_controller[97889]: 2025-11-24T22:29:26Z|00076|binding|INFO|Releasing lport c7096241-2beb-47ef-98b0-9df11d4700b3 from this chassis (sb_readonly=0)
Nov 24 22:29:26 compute-0 ovn_controller[97889]: 2025-11-24T22:29:26Z|00077|binding|INFO|Releasing lport 467754a3-548e-4841-9628-d4a6a4daa2bb from this chassis (sb_readonly=0)
Nov 24 22:29:26 compute-0 nova_compute[189608]: 2025-11-24 22:29:26.895 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:26 compute-0 nova_compute[189608]: 2025-11-24 22:29:26.989 189613 DEBUG nova.compute.manager [req-ec1b9c45-1ab8-4d46-b90e-ffa60818dc4e req-e01a2c94-e5a1-4848-9aa5-3c51e6793278 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Received event network-vif-plugged-50a96df6-c0e9-444e-a10a-8d2747085ccf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:29:26 compute-0 nova_compute[189608]: 2025-11-24 22:29:26.991 189613 DEBUG oslo_concurrency.lockutils [req-ec1b9c45-1ab8-4d46-b90e-ffa60818dc4e req-e01a2c94-e5a1-4848-9aa5-3c51e6793278 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "54138d77-c8e9-474f-8c09-e7f7cb37cbaa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:29:26 compute-0 nova_compute[189608]: 2025-11-24 22:29:26.993 189613 DEBUG oslo_concurrency.lockutils [req-ec1b9c45-1ab8-4d46-b90e-ffa60818dc4e req-e01a2c94-e5a1-4848-9aa5-3c51e6793278 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "54138d77-c8e9-474f-8c09-e7f7cb37cbaa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:29:26 compute-0 nova_compute[189608]: 2025-11-24 22:29:26.994 189613 DEBUG oslo_concurrency.lockutils [req-ec1b9c45-1ab8-4d46-b90e-ffa60818dc4e req-e01a2c94-e5a1-4848-9aa5-3c51e6793278 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "54138d77-c8e9-474f-8c09-e7f7cb37cbaa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:29:26 compute-0 nova_compute[189608]: 2025-11-24 22:29:26.994 189613 DEBUG nova.compute.manager [req-ec1b9c45-1ab8-4d46-b90e-ffa60818dc4e req-e01a2c94-e5a1-4848-9aa5-3c51e6793278 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] No waiting events found dispatching network-vif-plugged-50a96df6-c0e9-444e-a10a-8d2747085ccf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 22:29:26 compute-0 nova_compute[189608]: 2025-11-24 22:29:26.995 189613 WARNING nova.compute.manager [req-ec1b9c45-1ab8-4d46-b90e-ffa60818dc4e req-e01a2c94-e5a1-4848-9aa5-3c51e6793278 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Received unexpected event network-vif-plugged-50a96df6-c0e9-444e-a10a-8d2747085ccf for instance with vm_state active and task_state None.
Nov 24 22:29:27 compute-0 ovn_controller[97889]: 2025-11-24T22:29:27Z|00078|binding|INFO|Releasing lport c7096241-2beb-47ef-98b0-9df11d4700b3 from this chassis (sb_readonly=0)
Nov 24 22:29:27 compute-0 ovn_controller[97889]: 2025-11-24T22:29:27Z|00079|binding|INFO|Releasing lport 467754a3-548e-4841-9628-d4a6a4daa2bb from this chassis (sb_readonly=0)
Nov 24 22:29:27 compute-0 nova_compute[189608]: 2025-11-24 22:29:27.094 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:27 compute-0 nova_compute[189608]: 2025-11-24 22:29:27.349 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:27 compute-0 NetworkManager[56413]: <info>  [1764023367.3520] manager: (patch-br-int-to-provnet-4ab70a2e-e19a-4436-8557-6ae686d167d5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Nov 24 22:29:27 compute-0 NetworkManager[56413]: <info>  [1764023367.3578] manager: (patch-provnet-4ab70a2e-e19a-4436-8557-6ae686d167d5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Nov 24 22:29:27 compute-0 ovn_controller[97889]: 2025-11-24T22:29:27Z|00080|binding|INFO|Releasing lport c7096241-2beb-47ef-98b0-9df11d4700b3 from this chassis (sb_readonly=0)
Nov 24 22:29:27 compute-0 nova_compute[189608]: 2025-11-24 22:29:27.496 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:27 compute-0 ovn_controller[97889]: 2025-11-24T22:29:27Z|00081|binding|INFO|Releasing lport 467754a3-548e-4841-9628-d4a6a4daa2bb from this chassis (sb_readonly=0)
Nov 24 22:29:27 compute-0 nova_compute[189608]: 2025-11-24 22:29:27.514 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:27 compute-0 nova_compute[189608]: 2025-11-24 22:29:27.536 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:27 compute-0 nova_compute[189608]: 2025-11-24 22:29:27.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:29:27 compute-0 nova_compute[189608]: 2025-11-24 22:29:27.794 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 22:29:27 compute-0 nova_compute[189608]: 2025-11-24 22:29:27.794 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 22:29:28 compute-0 nova_compute[189608]: 2025-11-24 22:29:28.366 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "refresh_cache-8b851edf-b3aa-4ca0-a142-8dd0d0e6270a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:29:28 compute-0 nova_compute[189608]: 2025-11-24 22:29:28.367 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquired lock "refresh_cache-8b851edf-b3aa-4ca0-a142-8dd0d0e6270a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:29:28 compute-0 nova_compute[189608]: 2025-11-24 22:29:28.367 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 24 22:29:28 compute-0 nova_compute[189608]: 2025-11-24 22:29:28.367 189613 DEBUG nova.objects.instance [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:29:28 compute-0 nova_compute[189608]: 2025-11-24 22:29:28.584 189613 DEBUG nova.compute.manager [req-77413af4-0ef3-4f0a-92e4-18d8f8b4caf6 req-43e4eef4-fdf7-4946-9e35-65163d81948c c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Received event network-changed-e0390902-6ae3-485d-b497-f57a8cca001c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:29:28 compute-0 nova_compute[189608]: 2025-11-24 22:29:28.585 189613 DEBUG nova.compute.manager [req-77413af4-0ef3-4f0a-92e4-18d8f8b4caf6 req-43e4eef4-fdf7-4946-9e35-65163d81948c c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Refreshing instance network info cache due to event network-changed-e0390902-6ae3-485d-b497-f57a8cca001c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 22:29:28 compute-0 nova_compute[189608]: 2025-11-24 22:29:28.585 189613 DEBUG oslo_concurrency.lockutils [req-77413af4-0ef3-4f0a-92e4-18d8f8b4caf6 req-43e4eef4-fdf7-4946-9e35-65163d81948c c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "refresh_cache-8b851edf-b3aa-4ca0-a142-8dd0d0e6270a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:29:28 compute-0 nova_compute[189608]: 2025-11-24 22:29:28.982 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:29 compute-0 nova_compute[189608]: 2025-11-24 22:29:29.423 189613 DEBUG nova.compute.manager [req-babfde65-13f3-4ec6-a510-d9753323d440 req-b49858fc-1c8e-495a-864c-3ef169fc3973 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Received event network-changed-50a96df6-c0e9-444e-a10a-8d2747085ccf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:29:29 compute-0 nova_compute[189608]: 2025-11-24 22:29:29.423 189613 DEBUG nova.compute.manager [req-babfde65-13f3-4ec6-a510-d9753323d440 req-b49858fc-1c8e-495a-864c-3ef169fc3973 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Refreshing instance network info cache due to event network-changed-50a96df6-c0e9-444e-a10a-8d2747085ccf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 22:29:29 compute-0 nova_compute[189608]: 2025-11-24 22:29:29.423 189613 DEBUG oslo_concurrency.lockutils [req-babfde65-13f3-4ec6-a510-d9753323d440 req-b49858fc-1c8e-495a-864c-3ef169fc3973 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "refresh_cache-54138d77-c8e9-474f-8c09-e7f7cb37cbaa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:29:29 compute-0 nova_compute[189608]: 2025-11-24 22:29:29.423 189613 DEBUG oslo_concurrency.lockutils [req-babfde65-13f3-4ec6-a510-d9753323d440 req-b49858fc-1c8e-495a-864c-3ef169fc3973 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquired lock "refresh_cache-54138d77-c8e9-474f-8c09-e7f7cb37cbaa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:29:29 compute-0 nova_compute[189608]: 2025-11-24 22:29:29.424 189613 DEBUG nova.network.neutron [req-babfde65-13f3-4ec6-a510-d9753323d440 req-b49858fc-1c8e-495a-864c-3ef169fc3973 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Refreshing network info cache for port 50a96df6-c0e9-444e-a10a-8d2747085ccf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 22:29:29 compute-0 nova_compute[189608]: 2025-11-24 22:29:29.501 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:29 compute-0 podman[203795]: time="2025-11-24T22:29:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:29:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:29:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30758 "" "Go-http-client/1.1"
Nov 24 22:29:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:29:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5269 "" "Go-http-client/1.1"
Nov 24 22:29:31 compute-0 nova_compute[189608]: 2025-11-24 22:29:31.343 189613 DEBUG oslo_concurrency.lockutils [None req-008ca94f-93e4-422b-ba5d-af71f480a485 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Acquiring lock "54138d77-c8e9-474f-8c09-e7f7cb37cbaa" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:29:31 compute-0 nova_compute[189608]: 2025-11-24 22:29:31.345 189613 DEBUG oslo_concurrency.lockutils [None req-008ca94f-93e4-422b-ba5d-af71f480a485 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Lock "54138d77-c8e9-474f-8c09-e7f7cb37cbaa" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:29:31 compute-0 nova_compute[189608]: 2025-11-24 22:29:31.346 189613 DEBUG oslo_concurrency.lockutils [None req-008ca94f-93e4-422b-ba5d-af71f480a485 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Acquiring lock "54138d77-c8e9-474f-8c09-e7f7cb37cbaa-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:29:31 compute-0 nova_compute[189608]: 2025-11-24 22:29:31.347 189613 DEBUG oslo_concurrency.lockutils [None req-008ca94f-93e4-422b-ba5d-af71f480a485 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Lock "54138d77-c8e9-474f-8c09-e7f7cb37cbaa-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:29:31 compute-0 nova_compute[189608]: 2025-11-24 22:29:31.347 189613 DEBUG oslo_concurrency.lockutils [None req-008ca94f-93e4-422b-ba5d-af71f480a485 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Lock "54138d77-c8e9-474f-8c09-e7f7cb37cbaa-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:29:31 compute-0 nova_compute[189608]: 2025-11-24 22:29:31.350 189613 INFO nova.compute.manager [None req-008ca94f-93e4-422b-ba5d-af71f480a485 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Terminating instance
Nov 24 22:29:31 compute-0 nova_compute[189608]: 2025-11-24 22:29:31.352 189613 DEBUG nova.compute.manager [None req-008ca94f-93e4-422b-ba5d-af71f480a485 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 24 22:29:31 compute-0 kernel: tap50a96df6-c0 (unregistering): left promiscuous mode
Nov 24 22:29:31 compute-0 NetworkManager[56413]: <info>  [1764023371.4110] device (tap50a96df6-c0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 24 22:29:31 compute-0 ovn_controller[97889]: 2025-11-24T22:29:31Z|00082|binding|INFO|Releasing lport 50a96df6-c0e9-444e-a10a-8d2747085ccf from this chassis (sb_readonly=0)
Nov 24 22:29:31 compute-0 nova_compute[189608]: 2025-11-24 22:29:31.414 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:31 compute-0 ovn_controller[97889]: 2025-11-24T22:29:31Z|00083|binding|INFO|Setting lport 50a96df6-c0e9-444e-a10a-8d2747085ccf down in Southbound
Nov 24 22:29:31 compute-0 ovn_controller[97889]: 2025-11-24T22:29:31Z|00084|binding|INFO|Removing iface tap50a96df6-c0 ovn-installed in OVS
Nov 24 22:29:31 compute-0 openstack_network_exporter[205945]: ERROR   22:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:29:31 compute-0 openstack_network_exporter[205945]: ERROR   22:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:29:31 compute-0 openstack_network_exporter[205945]: ERROR   22:29:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:29:31 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:31.425 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:f0:5f 10.100.0.4'], port_security=['fa:16:3e:09:f0:5f 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '54138d77-c8e9-474f-8c09-e7f7cb37cbaa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2115acc4-d8c6-496a-ad70-3b4d97101ed5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92c99dbee50c412a8684f6b46045b25e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a161dd63-5db1-4976-8ffe-fc92903e38a5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.198'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c619122e-78c3-4cd7-8428-c42da5043e73, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], logical_port=50a96df6-c0e9-444e-a10a-8d2747085ccf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:29:31 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:31.429 106776 INFO neutron.agent.ovn.metadata.agent [-] Port 50a96df6-c0e9-444e-a10a-8d2747085ccf in datapath 2115acc4-d8c6-496a-ad70-3b4d97101ed5 unbound from our chassis
Nov 24 22:29:31 compute-0 nova_compute[189608]: 2025-11-24 22:29:31.433 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:31 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:31.433 106776 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2115acc4-d8c6-496a-ad70-3b4d97101ed5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 22:29:31 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:31.435 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[ce794f08-ca0e-416c-9226-0f3bbcd8dcc4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:29:31 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:31.437 106776 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2115acc4-d8c6-496a-ad70-3b4d97101ed5 namespace which is not needed anymore
Nov 24 22:29:31 compute-0 openstack_network_exporter[205945]: ERROR   22:29:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:29:31 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:29:31 compute-0 nova_compute[189608]: 2025-11-24 22:29:31.450 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:31 compute-0 openstack_network_exporter[205945]: ERROR   22:29:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:29:31 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:29:31 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Nov 24 22:29:31 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 7.680s CPU time.
Nov 24 22:29:31 compute-0 systemd-machined[155884]: Machine qemu-7-instance-00000007 terminated.
Nov 24 22:29:31 compute-0 nova_compute[189608]: 2025-11-24 22:29:31.650 189613 INFO nova.virt.libvirt.driver [-] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Instance destroyed successfully.
Nov 24 22:29:31 compute-0 nova_compute[189608]: 2025-11-24 22:29:31.651 189613 DEBUG nova.objects.instance [None req-008ca94f-93e4-422b-ba5d-af71f480a485 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Lazy-loading 'resources' on Instance uuid 54138d77-c8e9-474f-8c09-e7f7cb37cbaa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:29:31 compute-0 nova_compute[189608]: 2025-11-24 22:29:31.703 189613 DEBUG nova.virt.libvirt.vif [None req-008ca94f-93e4-422b-ba5d-af71f480a485 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T22:29:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1330136782',display_name='tempest-ServersTestManualDisk-server-1330136782',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1330136782',id=7,image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE2rR8EJVS2XFZhYBVxQE65O/XakZyGehGDzzcXRQvdzCvURgilOUpVaeNDlQDHNg2ecl0skHSbQrOHuYiOdGVrqUjR/pTiJGpcmn7MeRhN4XP1fh58EmZy6I49MehGoow==',key_name='tempest-keypair-962835102',keypairs=<?>,launch_index=0,launched_at=2025-11-24T22:29:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92c99dbee50c412a8684f6b46045b25e',ramdisk_id='',reservation_id='r-jk3lqaxp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-933824192',owner_user_name='tempest-ServersTestManualDisk-933824192-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T22:29:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='60bec8f905ce42c189140229b54eb832',uuid=54138d77-c8e9-474f-8c09-e7f7cb37cbaa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "50a96df6-c0e9-444e-a10a-8d2747085ccf", "address": "fa:16:3e:09:f0:5f", "network": {"id": "2115acc4-d8c6-496a-ad70-3b4d97101ed5", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-312324450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92c99dbee50c412a8684f6b46045b25e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50a96df6-c0", "ovs_interfaceid": "50a96df6-c0e9-444e-a10a-8d2747085ccf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 24 22:29:31 compute-0 nova_compute[189608]: 2025-11-24 22:29:31.704 189613 DEBUG nova.network.os_vif_util [None req-008ca94f-93e4-422b-ba5d-af71f480a485 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Converting VIF {"id": "50a96df6-c0e9-444e-a10a-8d2747085ccf", "address": "fa:16:3e:09:f0:5f", "network": {"id": "2115acc4-d8c6-496a-ad70-3b4d97101ed5", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-312324450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92c99dbee50c412a8684f6b46045b25e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50a96df6-c0", "ovs_interfaceid": "50a96df6-c0e9-444e-a10a-8d2747085ccf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 22:29:31 compute-0 nova_compute[189608]: 2025-11-24 22:29:31.706 189613 DEBUG nova.network.os_vif_util [None req-008ca94f-93e4-422b-ba5d-af71f480a485 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:f0:5f,bridge_name='br-int',has_traffic_filtering=True,id=50a96df6-c0e9-444e-a10a-8d2747085ccf,network=Network(2115acc4-d8c6-496a-ad70-3b4d97101ed5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50a96df6-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 22:29:31 compute-0 nova_compute[189608]: 2025-11-24 22:29:31.707 189613 DEBUG os_vif [None req-008ca94f-93e4-422b-ba5d-af71f480a485 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:f0:5f,bridge_name='br-int',has_traffic_filtering=True,id=50a96df6-c0e9-444e-a10a-8d2747085ccf,network=Network(2115acc4-d8c6-496a-ad70-3b4d97101ed5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50a96df6-c0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 24 22:29:31 compute-0 nova_compute[189608]: 2025-11-24 22:29:31.710 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:31 compute-0 nova_compute[189608]: 2025-11-24 22:29:31.711 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50a96df6-c0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:29:31 compute-0 nova_compute[189608]: 2025-11-24 22:29:31.713 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:31 compute-0 nova_compute[189608]: 2025-11-24 22:29:31.715 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:31 compute-0 nova_compute[189608]: 2025-11-24 22:29:31.720 189613 INFO os_vif [None req-008ca94f-93e4-422b-ba5d-af71f480a485 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:f0:5f,bridge_name='br-int',has_traffic_filtering=True,id=50a96df6-c0e9-444e-a10a-8d2747085ccf,network=Network(2115acc4-d8c6-496a-ad70-3b4d97101ed5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50a96df6-c0')
Nov 24 22:29:31 compute-0 nova_compute[189608]: 2025-11-24 22:29:31.722 189613 INFO nova.virt.libvirt.driver [None req-008ca94f-93e4-422b-ba5d-af71f480a485 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Deleting instance files /var/lib/nova/instances/54138d77-c8e9-474f-8c09-e7f7cb37cbaa_del
Nov 24 22:29:31 compute-0 neutron-haproxy-ovnmeta-2115acc4-d8c6-496a-ad70-3b4d97101ed5[251683]: [NOTICE]   (251687) : haproxy version is 2.8.14-c23fe91
Nov 24 22:29:31 compute-0 neutron-haproxy-ovnmeta-2115acc4-d8c6-496a-ad70-3b4d97101ed5[251683]: [NOTICE]   (251687) : path to executable is /usr/sbin/haproxy
Nov 24 22:29:31 compute-0 neutron-haproxy-ovnmeta-2115acc4-d8c6-496a-ad70-3b4d97101ed5[251683]: [WARNING]  (251687) : Exiting Master process...
Nov 24 22:29:31 compute-0 neutron-haproxy-ovnmeta-2115acc4-d8c6-496a-ad70-3b4d97101ed5[251683]: [WARNING]  (251687) : Exiting Master process...
Nov 24 22:29:31 compute-0 neutron-haproxy-ovnmeta-2115acc4-d8c6-496a-ad70-3b4d97101ed5[251683]: [ALERT]    (251687) : Current worker (251689) exited with code 143 (Terminated)
Nov 24 22:29:31 compute-0 neutron-haproxy-ovnmeta-2115acc4-d8c6-496a-ad70-3b4d97101ed5[251683]: [WARNING]  (251687) : All workers exited. Exiting... (0)
Nov 24 22:29:31 compute-0 nova_compute[189608]: 2025-11-24 22:29:31.725 189613 INFO nova.virt.libvirt.driver [None req-008ca94f-93e4-422b-ba5d-af71f480a485 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Deletion of /var/lib/nova/instances/54138d77-c8e9-474f-8c09-e7f7cb37cbaa_del complete
Nov 24 22:29:31 compute-0 systemd[1]: libpod-4449f7a81ad551fb77c6e1e5d384f2a0cc71a1bf928912478f8f1b6ba273e26b.scope: Deactivated successfully.
Nov 24 22:29:31 compute-0 podman[251775]: 2025-11-24 22:29:31.738526223 +0000 UTC m=+0.094878519 container died 4449f7a81ad551fb77c6e1e5d384f2a0cc71a1bf928912478f8f1b6ba273e26b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2115acc4-d8c6-496a-ad70-3b4d97101ed5, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 24 22:29:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4449f7a81ad551fb77c6e1e5d384f2a0cc71a1bf928912478f8f1b6ba273e26b-userdata-shm.mount: Deactivated successfully.
Nov 24 22:29:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-2559dda6481eb9c9011da8fcffec0f2ed55911c14f122559d00c29e8bef26418-merged.mount: Deactivated successfully.
Nov 24 22:29:31 compute-0 podman[251775]: 2025-11-24 22:29:31.820517731 +0000 UTC m=+0.176869987 container cleanup 4449f7a81ad551fb77c6e1e5d384f2a0cc71a1bf928912478f8f1b6ba273e26b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2115acc4-d8c6-496a-ad70-3b4d97101ed5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 24 22:29:31 compute-0 nova_compute[189608]: 2025-11-24 22:29:31.824 189613 INFO nova.compute.manager [None req-008ca94f-93e4-422b-ba5d-af71f480a485 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Took 0.47 seconds to destroy the instance on the hypervisor.
Nov 24 22:29:31 compute-0 nova_compute[189608]: 2025-11-24 22:29:31.825 189613 DEBUG oslo.service.loopingcall [None req-008ca94f-93e4-422b-ba5d-af71f480a485 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 24 22:29:31 compute-0 nova_compute[189608]: 2025-11-24 22:29:31.827 189613 DEBUG nova.compute.manager [-] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 24 22:29:31 compute-0 nova_compute[189608]: 2025-11-24 22:29:31.827 189613 DEBUG nova.network.neutron [-] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 24 22:29:31 compute-0 systemd[1]: libpod-conmon-4449f7a81ad551fb77c6e1e5d384f2a0cc71a1bf928912478f8f1b6ba273e26b.scope: Deactivated successfully.
Nov 24 22:29:31 compute-0 podman[251807]: 2025-11-24 22:29:31.931708755 +0000 UTC m=+0.071464922 container remove 4449f7a81ad551fb77c6e1e5d384f2a0cc71a1bf928912478f8f1b6ba273e26b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2115acc4-d8c6-496a-ad70-3b4d97101ed5, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 24 22:29:31 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:31.945 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[579e2d66-ee4e-4222-ba7c-4b89dc18f3af]: (4, ('Mon Nov 24 10:29:31 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2115acc4-d8c6-496a-ad70-3b4d97101ed5 (4449f7a81ad551fb77c6e1e5d384f2a0cc71a1bf928912478f8f1b6ba273e26b)\n4449f7a81ad551fb77c6e1e5d384f2a0cc71a1bf928912478f8f1b6ba273e26b\nMon Nov 24 10:29:31 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2115acc4-d8c6-496a-ad70-3b4d97101ed5 (4449f7a81ad551fb77c6e1e5d384f2a0cc71a1bf928912478f8f1b6ba273e26b)\n4449f7a81ad551fb77c6e1e5d384f2a0cc71a1bf928912478f8f1b6ba273e26b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:29:31 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:31.947 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[6e2e8a7e-9fc5-4e9d-a48e-a14af9e07349]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:29:31 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:31.949 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2115acc4-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:29:31 compute-0 kernel: tap2115acc4-d0: left promiscuous mode
Nov 24 22:29:31 compute-0 nova_compute[189608]: 2025-11-24 22:29:31.951 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:31 compute-0 nova_compute[189608]: 2025-11-24 22:29:31.963 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:31 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:31.967 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[0e03abdf-9a0c-4a93-93e3-ac4ff738fc5d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:29:31 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:31.989 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[ef3e66e2-d667-4aad-a478-b51156433330]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:29:31 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:31.993 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[2813f554-624f-4efc-b078-3b52c85458c2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:29:32 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:32.012 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[a5050978-ca4d-44b4-a7df-2edc1e9d6ff1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 521106, 'reachable_time': 28660, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251822, 'error': None, 'target': 'ovnmeta-2115acc4-d8c6-496a-ad70-3b4d97101ed5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:29:32 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:32.016 106891 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2115acc4-d8c6-496a-ad70-3b4d97101ed5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 24 22:29:32 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:32.016 106891 DEBUG oslo.privsep.daemon [-] privsep: reply[5b06db3b-ba83-49a9-a4ab-72c1cf8381d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:29:32 compute-0 systemd[1]: run-netns-ovnmeta\x2d2115acc4\x2dd8c6\x2d496a\x2dad70\x2d3b4d97101ed5.mount: Deactivated successfully.
Nov 24 22:29:32 compute-0 nova_compute[189608]: 2025-11-24 22:29:32.121 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Updating instance_info_cache with network_info: [{"id": "e0390902-6ae3-485d-b497-f57a8cca001c", "address": "fa:16:3e:84:e8:3d", "network": {"id": "c09fe20f-09f5-4457-8c18-2dd55de423b7", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1769316562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5225a786b1b64fcbbd2af0a1b5082c92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0390902-6a", "ovs_interfaceid": "e0390902-6ae3-485d-b497-f57a8cca001c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:29:32 compute-0 nova_compute[189608]: 2025-11-24 22:29:32.144 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Releasing lock "refresh_cache-8b851edf-b3aa-4ca0-a142-8dd0d0e6270a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:29:32 compute-0 nova_compute[189608]: 2025-11-24 22:29:32.144 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 24 22:29:32 compute-0 nova_compute[189608]: 2025-11-24 22:29:32.145 189613 DEBUG oslo_concurrency.lockutils [req-77413af4-0ef3-4f0a-92e4-18d8f8b4caf6 req-43e4eef4-fdf7-4946-9e35-65163d81948c c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquired lock "refresh_cache-8b851edf-b3aa-4ca0-a142-8dd0d0e6270a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:29:32 compute-0 nova_compute[189608]: 2025-11-24 22:29:32.146 189613 DEBUG nova.network.neutron [req-77413af4-0ef3-4f0a-92e4-18d8f8b4caf6 req-43e4eef4-fdf7-4946-9e35-65163d81948c c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Refreshing network info cache for port e0390902-6ae3-485d-b497-f57a8cca001c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 22:29:32 compute-0 nova_compute[189608]: 2025-11-24 22:29:32.149 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:29:32 compute-0 nova_compute[189608]: 2025-11-24 22:29:32.184 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:29:32 compute-0 nova_compute[189608]: 2025-11-24 22:29:32.184 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:29:32 compute-0 nova_compute[189608]: 2025-11-24 22:29:32.195 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.011s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:29:32 compute-0 nova_compute[189608]: 2025-11-24 22:29:32.196 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 22:29:32 compute-0 nova_compute[189608]: 2025-11-24 22:29:32.199 189613 DEBUG nova.network.neutron [req-babfde65-13f3-4ec6-a510-d9753323d440 req-b49858fc-1c8e-495a-864c-3ef169fc3973 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Updated VIF entry in instance network info cache for port 50a96df6-c0e9-444e-a10a-8d2747085ccf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 22:29:32 compute-0 nova_compute[189608]: 2025-11-24 22:29:32.200 189613 DEBUG nova.network.neutron [req-babfde65-13f3-4ec6-a510-d9753323d440 req-b49858fc-1c8e-495a-864c-3ef169fc3973 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Updating instance_info_cache with network_info: [{"id": "50a96df6-c0e9-444e-a10a-8d2747085ccf", "address": "fa:16:3e:09:f0:5f", "network": {"id": "2115acc4-d8c6-496a-ad70-3b4d97101ed5", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-312324450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92c99dbee50c412a8684f6b46045b25e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50a96df6-c0", "ovs_interfaceid": "50a96df6-c0e9-444e-a10a-8d2747085ccf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:29:32 compute-0 nova_compute[189608]: 2025-11-24 22:29:32.226 189613 DEBUG oslo_concurrency.lockutils [req-babfde65-13f3-4ec6-a510-d9753323d440 req-b49858fc-1c8e-495a-864c-3ef169fc3973 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Releasing lock "refresh_cache-54138d77-c8e9-474f-8c09-e7f7cb37cbaa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:29:32 compute-0 nova_compute[189608]: 2025-11-24 22:29:32.316 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:29:32 compute-0 nova_compute[189608]: 2025-11-24 22:29:32.383 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:29:32 compute-0 nova_compute[189608]: 2025-11-24 22:29:32.385 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:29:32 compute-0 nova_compute[189608]: 2025-11-24 22:29:32.454 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:29:32 compute-0 nova_compute[189608]: 2025-11-24 22:29:32.996 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:29:32 compute-0 nova_compute[189608]: 2025-11-24 22:29:32.998 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5200MB free_disk=72.16518020629883GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 22:29:32 compute-0 nova_compute[189608]: 2025-11-24 22:29:32.999 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:29:33 compute-0 nova_compute[189608]: 2025-11-24 22:29:33.000 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:29:33 compute-0 nova_compute[189608]: 2025-11-24 22:29:33.109 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:29:33 compute-0 nova_compute[189608]: 2025-11-24 22:29:33.110 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance 54138d77-c8e9-474f-8c09-e7f7cb37cbaa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:29:33 compute-0 nova_compute[189608]: 2025-11-24 22:29:33.110 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 22:29:33 compute-0 nova_compute[189608]: 2025-11-24 22:29:33.111 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 22:29:33 compute-0 nova_compute[189608]: 2025-11-24 22:29:33.188 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:29:33 compute-0 nova_compute[189608]: 2025-11-24 22:29:33.206 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:29:33 compute-0 nova_compute[189608]: 2025-11-24 22:29:33.239 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 22:29:33 compute-0 nova_compute[189608]: 2025-11-24 22:29:33.240 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.241s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:29:33 compute-0 nova_compute[189608]: 2025-11-24 22:29:33.283 189613 DEBUG nova.network.neutron [-] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:29:33 compute-0 nova_compute[189608]: 2025-11-24 22:29:33.315 189613 INFO nova.compute.manager [-] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Took 1.49 seconds to deallocate network for instance.
Nov 24 22:29:33 compute-0 nova_compute[189608]: 2025-11-24 22:29:33.377 189613 DEBUG oslo_concurrency.lockutils [None req-008ca94f-93e4-422b-ba5d-af71f480a485 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:29:33 compute-0 nova_compute[189608]: 2025-11-24 22:29:33.378 189613 DEBUG oslo_concurrency.lockutils [None req-008ca94f-93e4-422b-ba5d-af71f480a485 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:29:33 compute-0 nova_compute[189608]: 2025-11-24 22:29:33.473 189613 DEBUG nova.compute.provider_tree [None req-008ca94f-93e4-422b-ba5d-af71f480a485 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:29:33 compute-0 nova_compute[189608]: 2025-11-24 22:29:33.490 189613 DEBUG nova.scheduler.client.report [None req-008ca94f-93e4-422b-ba5d-af71f480a485 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:29:33 compute-0 nova_compute[189608]: 2025-11-24 22:29:33.509 189613 DEBUG nova.compute.manager [req-9ffe0a08-d881-4295-a957-1a9fdf3b516a req-1ab085a2-1fc3-4df9-9a66-a3669a243ed5 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Received event network-vif-deleted-50a96df6-c0e9-444e-a10a-8d2747085ccf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:29:33 compute-0 nova_compute[189608]: 2025-11-24 22:29:33.519 189613 DEBUG oslo_concurrency.lockutils [None req-008ca94f-93e4-422b-ba5d-af71f480a485 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.142s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:29:33 compute-0 nova_compute[189608]: 2025-11-24 22:29:33.546 189613 INFO nova.scheduler.client.report [None req-008ca94f-93e4-422b-ba5d-af71f480a485 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Deleted allocations for instance 54138d77-c8e9-474f-8c09-e7f7cb37cbaa
Nov 24 22:29:33 compute-0 nova_compute[189608]: 2025-11-24 22:29:33.627 189613 DEBUG oslo_concurrency.lockutils [None req-008ca94f-93e4-422b-ba5d-af71f480a485 60bec8f905ce42c189140229b54eb832 92c99dbee50c412a8684f6b46045b25e - - default default] Lock "54138d77-c8e9-474f-8c09-e7f7cb37cbaa" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.282s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:29:33 compute-0 nova_compute[189608]: 2025-11-24 22:29:33.987 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:35 compute-0 nova_compute[189608]: 2025-11-24 22:29:35.213 189613 DEBUG nova.network.neutron [req-77413af4-0ef3-4f0a-92e4-18d8f8b4caf6 req-43e4eef4-fdf7-4946-9e35-65163d81948c c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Updated VIF entry in instance network info cache for port e0390902-6ae3-485d-b497-f57a8cca001c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 22:29:35 compute-0 nova_compute[189608]: 2025-11-24 22:29:35.215 189613 DEBUG nova.network.neutron [req-77413af4-0ef3-4f0a-92e4-18d8f8b4caf6 req-43e4eef4-fdf7-4946-9e35-65163d81948c c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Updating instance_info_cache with network_info: [{"id": "e0390902-6ae3-485d-b497-f57a8cca001c", "address": "fa:16:3e:84:e8:3d", "network": {"id": "c09fe20f-09f5-4457-8c18-2dd55de423b7", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1769316562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5225a786b1b64fcbbd2af0a1b5082c92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0390902-6a", "ovs_interfaceid": "e0390902-6ae3-485d-b497-f57a8cca001c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:29:35 compute-0 nova_compute[189608]: 2025-11-24 22:29:35.258 189613 DEBUG oslo_concurrency.lockutils [req-77413af4-0ef3-4f0a-92e4-18d8f8b4caf6 req-43e4eef4-fdf7-4946-9e35-65163d81948c c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Releasing lock "refresh_cache-8b851edf-b3aa-4ca0-a142-8dd0d0e6270a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:29:35 compute-0 podman[251830]: 2025-11-24 22:29:35.591246356 +0000 UTC m=+0.131025813 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 24 22:29:35 compute-0 nova_compute[189608]: 2025-11-24 22:29:35.773 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:35 compute-0 nova_compute[189608]: 2025-11-24 22:29:35.886 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:29:35 compute-0 nova_compute[189608]: 2025-11-24 22:29:35.904 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:29:36 compute-0 nova_compute[189608]: 2025-11-24 22:29:36.716 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:37 compute-0 nova_compute[189608]: 2025-11-24 22:29:37.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:29:38 compute-0 nova_compute[189608]: 2025-11-24 22:29:38.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:29:38 compute-0 nova_compute[189608]: 2025-11-24 22:29:38.990 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:39 compute-0 nova_compute[189608]: 2025-11-24 22:29:39.482 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:39 compute-0 nova_compute[189608]: 2025-11-24 22:29:39.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:29:41 compute-0 ovn_controller[97889]: 2025-11-24T22:29:41Z|00085|binding|INFO|Releasing lport 467754a3-548e-4841-9628-d4a6a4daa2bb from this chassis (sb_readonly=0)
Nov 24 22:29:41 compute-0 nova_compute[189608]: 2025-11-24 22:29:41.455 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:41 compute-0 podman[251854]: 2025-11-24 22:29:41.628417914 +0000 UTC m=+0.159195868 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 24 22:29:41 compute-0 nova_compute[189608]: 2025-11-24 22:29:41.721 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:41 compute-0 nova_compute[189608]: 2025-11-24 22:29:41.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:29:41 compute-0 nova_compute[189608]: 2025-11-24 22:29:41.844 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:43 compute-0 nova_compute[189608]: 2025-11-24 22:29:43.720 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:43 compute-0 nova_compute[189608]: 2025-11-24 22:29:43.792 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:29:43 compute-0 nova_compute[189608]: 2025-11-24 22:29:43.793 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 22:29:43 compute-0 nova_compute[189608]: 2025-11-24 22:29:43.994 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:44 compute-0 podman[251874]: 2025-11-24 22:29:44.549181619 +0000 UTC m=+0.106760349 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 24 22:29:44 compute-0 nova_compute[189608]: 2025-11-24 22:29:44.706 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:45 compute-0 nova_compute[189608]: 2025-11-24 22:29:45.572 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:46 compute-0 nova_compute[189608]: 2025-11-24 22:29:46.642 189613 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764023371.6403995, 54138d77-c8e9-474f-8c09-e7f7cb37cbaa => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:29:46 compute-0 nova_compute[189608]: 2025-11-24 22:29:46.644 189613 INFO nova.compute.manager [-] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] VM Stopped (Lifecycle Event)
Nov 24 22:29:46 compute-0 nova_compute[189608]: 2025-11-24 22:29:46.677 189613 DEBUG nova.compute.manager [None req-f3d66f81-fd59-4271-b09a-eae71f997172 - - - - - -] [instance: 54138d77-c8e9-474f-8c09-e7f7cb37cbaa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:29:46 compute-0 nova_compute[189608]: 2025-11-24 22:29:46.725 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:48 compute-0 nova_compute[189608]: 2025-11-24 22:29:48.998 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:49 compute-0 podman[251895]: 2025-11-24 22:29:49.566048033 +0000 UTC m=+0.106283543 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 24 22:29:49 compute-0 nova_compute[189608]: 2025-11-24 22:29:49.584 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:49 compute-0 podman[251893]: 2025-11-24 22:29:49.592446133 +0000 UTC m=+0.144110599 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., name=ubi9, io.openshift.tags=base rhel9, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, config_id=edpm, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, release=1214.1726694543, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=)
Nov 24 22:29:49 compute-0 podman[251894]: 2025-11-24 22:29:49.628554766 +0000 UTC m=+0.165808984 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, name=ubi9-minimal, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, version=9.6, container_name=openstack_network_exporter, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Nov 24 22:29:50 compute-0 nova_compute[189608]: 2025-11-24 22:29:50.550 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:51 compute-0 nova_compute[189608]: 2025-11-24 22:29:51.729 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:54 compute-0 nova_compute[189608]: 2025-11-24 22:29:54.007 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:54 compute-0 podman[251951]: 2025-11-24 22:29:54.529936912 +0000 UTC m=+0.083949299 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 22:29:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:54.595 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:29:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:54.596 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:29:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:29:54.597 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:29:55 compute-0 sshd-session[251949]: Invalid user 0 from 185.217.1.246 port 29924
Nov 24 22:29:56 compute-0 nova_compute[189608]: 2025-11-24 22:29:56.736 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:57 compute-0 podman[251987]: 2025-11-24 22:29:57.585741852 +0000 UTC m=+0.120424963 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 24 22:29:57 compute-0 podman[251986]: 2025-11-24 22:29:57.63651938 +0000 UTC m=+0.172590824 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 24 22:29:58 compute-0 nova_compute[189608]: 2025-11-24 22:29:58.115 189613 DEBUG oslo_concurrency.lockutils [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Acquiring lock "57d86171-0790-4408-ae34-dfc07ee52747" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:29:58 compute-0 nova_compute[189608]: 2025-11-24 22:29:58.116 189613 DEBUG oslo_concurrency.lockutils [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Lock "57d86171-0790-4408-ae34-dfc07ee52747" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:29:58 compute-0 nova_compute[189608]: 2025-11-24 22:29:58.144 189613 DEBUG nova.compute.manager [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 24 22:29:58 compute-0 nova_compute[189608]: 2025-11-24 22:29:58.235 189613 DEBUG oslo_concurrency.lockutils [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:29:58 compute-0 nova_compute[189608]: 2025-11-24 22:29:58.238 189613 DEBUG oslo_concurrency.lockutils [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:29:58 compute-0 ovn_controller[97889]: 2025-11-24T22:29:58Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:84:e8:3d 10.100.0.13
Nov 24 22:29:58 compute-0 nova_compute[189608]: 2025-11-24 22:29:58.259 189613 DEBUG nova.virt.hardware [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 24 22:29:58 compute-0 ovn_controller[97889]: 2025-11-24T22:29:58Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:84:e8:3d 10.100.0.13
Nov 24 22:29:58 compute-0 nova_compute[189608]: 2025-11-24 22:29:58.260 189613 INFO nova.compute.claims [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Claim successful on node compute-0.ctlplane.example.com
Nov 24 22:29:58 compute-0 nova_compute[189608]: 2025-11-24 22:29:58.457 189613 DEBUG nova.compute.provider_tree [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:29:58 compute-0 nova_compute[189608]: 2025-11-24 22:29:58.475 189613 DEBUG nova.scheduler.client.report [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:29:58 compute-0 nova_compute[189608]: 2025-11-24 22:29:58.509 189613 DEBUG oslo_concurrency.lockutils [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.271s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:29:58 compute-0 nova_compute[189608]: 2025-11-24 22:29:58.510 189613 DEBUG nova.compute.manager [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 24 22:29:58 compute-0 nova_compute[189608]: 2025-11-24 22:29:58.556 189613 DEBUG nova.compute.manager [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 24 22:29:58 compute-0 nova_compute[189608]: 2025-11-24 22:29:58.557 189613 DEBUG nova.network.neutron [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 24 22:29:58 compute-0 nova_compute[189608]: 2025-11-24 22:29:58.578 189613 INFO nova.virt.libvirt.driver [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 24 22:29:58 compute-0 nova_compute[189608]: 2025-11-24 22:29:58.602 189613 DEBUG nova.compute.manager [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 24 22:29:58 compute-0 nova_compute[189608]: 2025-11-24 22:29:58.836 189613 DEBUG nova.compute.manager [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 24 22:29:58 compute-0 nova_compute[189608]: 2025-11-24 22:29:58.839 189613 DEBUG nova.virt.libvirt.driver [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 24 22:29:58 compute-0 nova_compute[189608]: 2025-11-24 22:29:58.840 189613 INFO nova.virt.libvirt.driver [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Creating image(s)
Nov 24 22:29:58 compute-0 nova_compute[189608]: 2025-11-24 22:29:58.842 189613 DEBUG oslo_concurrency.lockutils [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Acquiring lock "/var/lib/nova/instances/57d86171-0790-4408-ae34-dfc07ee52747/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:29:58 compute-0 nova_compute[189608]: 2025-11-24 22:29:58.843 189613 DEBUG oslo_concurrency.lockutils [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Lock "/var/lib/nova/instances/57d86171-0790-4408-ae34-dfc07ee52747/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:29:58 compute-0 nova_compute[189608]: 2025-11-24 22:29:58.844 189613 DEBUG oslo_concurrency.lockutils [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Lock "/var/lib/nova/instances/57d86171-0790-4408-ae34-dfc07ee52747/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:29:58 compute-0 nova_compute[189608]: 2025-11-24 22:29:58.876 189613 DEBUG oslo_concurrency.lockutils [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Acquiring lock "f238e71a-660e-497c-8472-193245387bcf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:29:58 compute-0 nova_compute[189608]: 2025-11-24 22:29:58.877 189613 DEBUG oslo_concurrency.lockutils [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Lock "f238e71a-660e-497c-8472-193245387bcf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:29:58 compute-0 nova_compute[189608]: 2025-11-24 22:29:58.878 189613 DEBUG oslo_concurrency.processutils [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:29:58 compute-0 nova_compute[189608]: 2025-11-24 22:29:58.914 189613 DEBUG nova.compute.manager [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 24 22:29:58 compute-0 nova_compute[189608]: 2025-11-24 22:29:58.955 189613 DEBUG oslo_concurrency.processutils [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:29:58 compute-0 nova_compute[189608]: 2025-11-24 22:29:58.956 189613 DEBUG oslo_concurrency.lockutils [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Acquiring lock "a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:29:58 compute-0 nova_compute[189608]: 2025-11-24 22:29:58.958 189613 DEBUG oslo_concurrency.lockutils [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Lock "a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:29:58 compute-0 nova_compute[189608]: 2025-11-24 22:29:58.982 189613 DEBUG oslo_concurrency.processutils [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.020 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.029 189613 DEBUG nova.policy [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '304529bdd01048709c29df90922b1b2d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '47b8cf2705154817a1a23039debe2ac1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.049 189613 DEBUG oslo_concurrency.lockutils [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.050 189613 DEBUG oslo_concurrency.lockutils [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.061 189613 DEBUG nova.virt.hardware [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.061 189613 INFO nova.compute.claims [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Claim successful on node compute-0.ctlplane.example.com
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.068 189613 DEBUG oslo_concurrency.processutils [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.070 189613 DEBUG oslo_concurrency.processutils [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e,backing_fmt=raw /var/lib/nova/instances/57d86171-0790-4408-ae34-dfc07ee52747/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.121 189613 DEBUG oslo_concurrency.processutils [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e,backing_fmt=raw /var/lib/nova/instances/57d86171-0790-4408-ae34-dfc07ee52747/disk 1073741824" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.123 189613 DEBUG oslo_concurrency.lockutils [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Lock "a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.165s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.124 189613 DEBUG oslo_concurrency.processutils [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.195 189613 DEBUG oslo_concurrency.processutils [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.196 189613 DEBUG nova.virt.disk.api [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Checking if we can resize image /var/lib/nova/instances/57d86171-0790-4408-ae34-dfc07ee52747/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.197 189613 DEBUG oslo_concurrency.processutils [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/57d86171-0790-4408-ae34-dfc07ee52747/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.270 189613 DEBUG oslo_concurrency.processutils [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/57d86171-0790-4408-ae34-dfc07ee52747/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.272 189613 DEBUG nova.virt.disk.api [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Cannot resize image /var/lib/nova/instances/57d86171-0790-4408-ae34-dfc07ee52747/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
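Note: the "Cannot resize image ... to a smaller size" line is Nova comparing the flavor's requested root-disk size with the overlay's current virtual size; growing is allowed, shrinking is refused. A rough sketch of that comparison (illustrative, not Nova's actual code), reading the virtual size from `qemu-img info` JSON:

```python
import json
import subprocess

def virtual_size(path: str) -> int:
    """Return an image's virtual size in bytes via qemu-img info."""
    out = subprocess.check_output(
        ["qemu-img", "info", path, "--force-share", "--output=json"])
    return json.loads(out)["virtual-size"]

def can_resize_image(path: str, requested_bytes: int) -> bool:
    # Shrinking a disk in place would truncate guest data, so only allow
    # growing it -- the behaviour logged above.
    return virtual_size(path) < requested_bytes

disk = "/var/lib/nova/instances/57d86171-0790-4408-ae34-dfc07ee52747/disk"
print(can_resize_image(disk, 1073741824))
```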
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.273 189613 DEBUG nova.objects.instance [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Lazy-loading 'migration_context' on Instance uuid 57d86171-0790-4408-ae34-dfc07ee52747 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.282 189613 DEBUG nova.compute.provider_tree [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.301 189613 DEBUG nova.scheduler.client.report [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
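Note: the inventory reported to Placement encodes per-resource overcommit; usable capacity is (total - reserved) * allocation_ratio, which for the figures above works out to 32 VCPU, 7167 MB of RAM and 70 GB of disk. A small worked example of that arithmetic:

```python
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
}

# capacity = (total - reserved) * allocation_ratio, truncated to an integer;
# this is what the scheduler can place against the provider.
for rc, inv in inventory.items():
    capacity = int((inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    print(f"{rc}: {capacity}")
# VCPU: 32, MEMORY_MB: 7167, DISK_GB: 70
```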
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.310 189613 DEBUG nova.virt.libvirt.driver [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.311 189613 DEBUG nova.virt.libvirt.driver [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Ensure instance console log exists: /var/lib/nova/instances/57d86171-0790-4408-ae34-dfc07ee52747/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.313 189613 DEBUG oslo_concurrency.lockutils [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.313 189613 DEBUG oslo_concurrency.lockutils [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.314 189613 DEBUG oslo_concurrency.lockutils [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
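Note: the acquire/release pairs around "vgpu_resources", "compute_resources" and the base-image hash are all oslo.concurrency locks; the same primitive is available directly as a context manager. A minimal sketch, assuming oslo.concurrency is installed (the lock names and the lock_path are illustrative, not Nova's configuration):

```python
from oslo_concurrency import lockutils

# In-process lock, comparable to the "vgpu_resources" entries above.
with lockutils.lock("vgpu_resources"):
    pass  # allocate mediated devices, touch shared state, ...

# External (file-based) lock, used when several processes must serialise
# access to the same on-disk artifact such as a cached base image.
with lockutils.lock("a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e",
                    external=True, lock_path="/var/lib/nova/locks"):
    pass  # create or rebase the qcow2 overlay
```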
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.334 189613 DEBUG oslo_concurrency.lockutils [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.284s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.335 189613 DEBUG nova.compute.manager [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.400 189613 DEBUG nova.compute.manager [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.402 189613 DEBUG nova.network.neutron [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.421 189613 INFO nova.virt.libvirt.driver [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.440 189613 DEBUG nova.compute.manager [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.546 189613 DEBUG nova.compute.manager [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.548 189613 DEBUG nova.virt.libvirt.driver [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.549 189613 INFO nova.virt.libvirt.driver [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Creating image(s)
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.551 189613 DEBUG oslo_concurrency.lockutils [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Acquiring lock "/var/lib/nova/instances/f238e71a-660e-497c-8472-193245387bcf/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.551 189613 DEBUG oslo_concurrency.lockutils [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Lock "/var/lib/nova/instances/f238e71a-660e-497c-8472-193245387bcf/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.552 189613 DEBUG oslo_concurrency.lockutils [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Lock "/var/lib/nova/instances/f238e71a-660e-497c-8472-193245387bcf/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.570 189613 DEBUG oslo_concurrency.processutils [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.660 189613 DEBUG oslo_concurrency.processutils [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.663 189613 DEBUG oslo_concurrency.lockutils [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Acquiring lock "a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.665 189613 DEBUG oslo_concurrency.lockutils [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Lock "a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.692 189613 DEBUG oslo_concurrency.processutils [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.725 189613 DEBUG nova.policy [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '11288fa7771048b4a8faf1d6485ab059', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '97e21ffeec1c4428ba3d70499fc3281f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
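Note: this entry records a failed authorization for `network:attach_external_network`: the credentials only carry the `member` and `reader` roles, and the rule is admin-only by default, so the port is attached as a normal tenant network instead. A stripped-down sketch of that kind of check with oslo.policy (the check string and defaults are assumed here, not copied from Nova's policy files):

```python
from oslo_config import cfg
from oslo_policy import policy

enforcer = policy.Enforcer(cfg.CONF)
enforcer.register_default(
    policy.RuleDefault("network:attach_external_network", "is_admin:True"))

creds = {"roles": ["reader", "member"], "is_admin": False,
         "project_id": "97e21ffeec1c4428ba3d70499fc3281f"}

# enforce() returns False for a non-admin caller instead of raising,
# matching the "Policy check ... failed" debug line above.
allowed = enforcer.enforce("network:attach_external_network", {}, creds)
print(allowed)  # False
```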
Nov 24 22:29:59 compute-0 podman[203795]: time="2025-11-24T22:29:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:29:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:29:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Nov 24 22:29:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:29:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4803 "" "Go-http-client/1.1"
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.791 189613 DEBUG oslo_concurrency.processutils [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.792 189613 DEBUG oslo_concurrency.processutils [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e,backing_fmt=raw /var/lib/nova/instances/f238e71a-660e-497c-8472-193245387bcf/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.869 189613 DEBUG oslo_concurrency.processutils [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e,backing_fmt=raw /var/lib/nova/instances/f238e71a-660e-497c-8472-193245387bcf/disk 1073741824" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.871 189613 DEBUG oslo_concurrency.lockutils [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Lock "a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.206s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.871 189613 DEBUG oslo_concurrency.processutils [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.964 189613 DEBUG oslo_concurrency.processutils [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.965 189613 DEBUG nova.virt.disk.api [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Checking if we can resize image /var/lib/nova/instances/f238e71a-660e-497c-8472-193245387bcf/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 24 22:29:59 compute-0 nova_compute[189608]: 2025-11-24 22:29:59.966 189613 DEBUG oslo_concurrency.processutils [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f238e71a-660e-497c-8472-193245387bcf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:30:00 compute-0 nova_compute[189608]: 2025-11-24 22:30:00.064 189613 DEBUG oslo_concurrency.processutils [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f238e71a-660e-497c-8472-193245387bcf/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:30:00 compute-0 nova_compute[189608]: 2025-11-24 22:30:00.065 189613 DEBUG nova.virt.disk.api [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Cannot resize image /var/lib/nova/instances/f238e71a-660e-497c-8472-193245387bcf/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Nov 24 22:30:00 compute-0 nova_compute[189608]: 2025-11-24 22:30:00.066 189613 DEBUG nova.objects.instance [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Lazy-loading 'migration_context' on Instance uuid f238e71a-660e-497c-8472-193245387bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:30:00 compute-0 nova_compute[189608]: 2025-11-24 22:30:00.078 189613 DEBUG nova.virt.libvirt.driver [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 24 22:30:00 compute-0 nova_compute[189608]: 2025-11-24 22:30:00.078 189613 DEBUG nova.virt.libvirt.driver [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Ensure instance console log exists: /var/lib/nova/instances/f238e71a-660e-497c-8472-193245387bcf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 24 22:30:00 compute-0 nova_compute[189608]: 2025-11-24 22:30:00.079 189613 DEBUG oslo_concurrency.lockutils [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:30:00 compute-0 nova_compute[189608]: 2025-11-24 22:30:00.079 189613 DEBUG oslo_concurrency.lockutils [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:30:00 compute-0 nova_compute[189608]: 2025-11-24 22:30:00.079 189613 DEBUG oslo_concurrency.lockutils [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:30:00 compute-0 sshd-session[251949]: Disconnecting invalid user 0 185.217.1.246 port 29924: Change of username or service not allowed: (0,ssh-connection) -> (cirros,ssh-connection) [preauth]
Nov 24 22:30:00 compute-0 nova_compute[189608]: 2025-11-24 22:30:00.863 189613 DEBUG nova.network.neutron [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Successfully created port: 05ab28a1-e08f-4aa8-83d6-671fd5720283 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 24 22:30:00 compute-0 nova_compute[189608]: 2025-11-24 22:30:00.891 189613 DEBUG nova.network.neutron [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Successfully created port: fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
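Note: both "Successfully created port" entries are Nova calling Neutron's port-create API on behalf of the instances being built. Outside Nova, an equivalent call looks roughly like this with openstacksdk (the cloud name is a placeholder; the network ID is the one that appears later in this log):

```python
import openstack

# Credentials come from clouds.yaml / environment; "mycloud" is a placeholder.
conn = openstack.connect(cloud="mycloud")

port = conn.network.create_port(
    network_id="f12ed9ff-32cf-41a2-a508-d96ae5468fa1",
    name="example-port",
    # Nova completes the binding itself later (device_id, binding:host_id).
)
print(port.id, port.mac_address)
```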
Nov 24 22:30:01 compute-0 nova_compute[189608]: 2025-11-24 22:30:01.280 189613 DEBUG oslo_concurrency.lockutils [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Acquiring lock "cf45f1e3-b80d-4213-80aa-995f57a9a476" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:30:01 compute-0 nova_compute[189608]: 2025-11-24 22:30:01.281 189613 DEBUG oslo_concurrency.lockutils [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Lock "cf45f1e3-b80d-4213-80aa-995f57a9a476" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:30:01 compute-0 nova_compute[189608]: 2025-11-24 22:30:01.302 189613 DEBUG nova.compute.manager [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 24 22:30:01 compute-0 nova_compute[189608]: 2025-11-24 22:30:01.373 189613 DEBUG oslo_concurrency.lockutils [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:30:01 compute-0 nova_compute[189608]: 2025-11-24 22:30:01.373 189613 DEBUG oslo_concurrency.lockutils [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:30:01 compute-0 nova_compute[189608]: 2025-11-24 22:30:01.383 189613 DEBUG nova.virt.hardware [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 24 22:30:01 compute-0 nova_compute[189608]: 2025-11-24 22:30:01.384 189613 INFO nova.compute.claims [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Claim successful on node compute-0.ctlplane.example.com
Nov 24 22:30:01 compute-0 openstack_network_exporter[205945]: ERROR   22:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:30:01 compute-0 openstack_network_exporter[205945]: ERROR   22:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:30:01 compute-0 openstack_network_exporter[205945]: ERROR   22:30:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:30:01 compute-0 openstack_network_exporter[205945]: ERROR   22:30:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:30:01 compute-0 openstack_network_exporter[205945]: ERROR   22:30:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:30:01 compute-0 nova_compute[189608]: 2025-11-24 22:30:01.599 189613 DEBUG nova.compute.provider_tree [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:30:01 compute-0 nova_compute[189608]: 2025-11-24 22:30:01.623 189613 DEBUG nova.scheduler.client.report [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:30:01 compute-0 nova_compute[189608]: 2025-11-24 22:30:01.656 189613 DEBUG oslo_concurrency.lockutils [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.282s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:30:01 compute-0 nova_compute[189608]: 2025-11-24 22:30:01.657 189613 DEBUG nova.compute.manager [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 24 22:30:01 compute-0 nova_compute[189608]: 2025-11-24 22:30:01.711 189613 DEBUG nova.compute.manager [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 24 22:30:01 compute-0 nova_compute[189608]: 2025-11-24 22:30:01.713 189613 DEBUG nova.network.neutron [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 24 22:30:01 compute-0 nova_compute[189608]: 2025-11-24 22:30:01.740 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:01 compute-0 nova_compute[189608]: 2025-11-24 22:30:01.768 189613 INFO nova.virt.libvirt.driver [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 24 22:30:01 compute-0 nova_compute[189608]: 2025-11-24 22:30:01.796 189613 DEBUG nova.compute.manager [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 24 22:30:01 compute-0 nova_compute[189608]: 2025-11-24 22:30:01.897 189613 DEBUG nova.network.neutron [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Successfully updated port: 05ab28a1-e08f-4aa8-83d6-671fd5720283 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 24 22:30:01 compute-0 nova_compute[189608]: 2025-11-24 22:30:01.933 189613 DEBUG oslo_concurrency.lockutils [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Acquiring lock "refresh_cache-57d86171-0790-4408-ae34-dfc07ee52747" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:30:01 compute-0 nova_compute[189608]: 2025-11-24 22:30:01.934 189613 DEBUG oslo_concurrency.lockutils [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Acquired lock "refresh_cache-57d86171-0790-4408-ae34-dfc07ee52747" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:30:01 compute-0 nova_compute[189608]: 2025-11-24 22:30:01.934 189613 DEBUG nova.network.neutron [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 22:30:01 compute-0 nova_compute[189608]: 2025-11-24 22:30:01.939 189613 DEBUG nova.compute.manager [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 24 22:30:01 compute-0 nova_compute[189608]: 2025-11-24 22:30:01.941 189613 DEBUG nova.virt.libvirt.driver [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 24 22:30:01 compute-0 nova_compute[189608]: 2025-11-24 22:30:01.942 189613 INFO nova.virt.libvirt.driver [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Creating image(s)
Nov 24 22:30:01 compute-0 nova_compute[189608]: 2025-11-24 22:30:01.943 189613 DEBUG oslo_concurrency.lockutils [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Acquiring lock "/var/lib/nova/instances/cf45f1e3-b80d-4213-80aa-995f57a9a476/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:30:01 compute-0 nova_compute[189608]: 2025-11-24 22:30:01.944 189613 DEBUG oslo_concurrency.lockutils [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Lock "/var/lib/nova/instances/cf45f1e3-b80d-4213-80aa-995f57a9a476/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:30:01 compute-0 nova_compute[189608]: 2025-11-24 22:30:01.945 189613 DEBUG oslo_concurrency.lockutils [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Lock "/var/lib/nova/instances/cf45f1e3-b80d-4213-80aa-995f57a9a476/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:30:01 compute-0 nova_compute[189608]: 2025-11-24 22:30:01.975 189613 DEBUG oslo_concurrency.processutils [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:30:02 compute-0 nova_compute[189608]: 2025-11-24 22:30:02.068 189613 DEBUG oslo_concurrency.processutils [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:30:02 compute-0 nova_compute[189608]: 2025-11-24 22:30:02.069 189613 DEBUG oslo_concurrency.lockutils [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Acquiring lock "a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:30:02 compute-0 nova_compute[189608]: 2025-11-24 22:30:02.070 189613 DEBUG oslo_concurrency.lockutils [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Lock "a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:30:02 compute-0 nova_compute[189608]: 2025-11-24 22:30:02.079 189613 DEBUG oslo_concurrency.processutils [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:30:02 compute-0 nova_compute[189608]: 2025-11-24 22:30:02.141 189613 DEBUG oslo_concurrency.processutils [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:30:02 compute-0 nova_compute[189608]: 2025-11-24 22:30:02.142 189613 DEBUG oslo_concurrency.processutils [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e,backing_fmt=raw /var/lib/nova/instances/cf45f1e3-b80d-4213-80aa-995f57a9a476/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:30:02 compute-0 nova_compute[189608]: 2025-11-24 22:30:02.198 189613 DEBUG oslo_concurrency.processutils [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e,backing_fmt=raw /var/lib/nova/instances/cf45f1e3-b80d-4213-80aa-995f57a9a476/disk 1073741824" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:30:02 compute-0 nova_compute[189608]: 2025-11-24 22:30:02.199 189613 DEBUG oslo_concurrency.lockutils [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Lock "a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.130s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:30:02 compute-0 nova_compute[189608]: 2025-11-24 22:30:02.200 189613 DEBUG oslo_concurrency.processutils [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:30:02 compute-0 nova_compute[189608]: 2025-11-24 22:30:02.268 189613 DEBUG nova.network.neutron [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 22:30:02 compute-0 nova_compute[189608]: 2025-11-24 22:30:02.272 189613 DEBUG oslo_concurrency.processutils [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:30:02 compute-0 nova_compute[189608]: 2025-11-24 22:30:02.273 189613 DEBUG nova.virt.disk.api [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Checking if we can resize image /var/lib/nova/instances/cf45f1e3-b80d-4213-80aa-995f57a9a476/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 24 22:30:02 compute-0 nova_compute[189608]: 2025-11-24 22:30:02.273 189613 DEBUG oslo_concurrency.processutils [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cf45f1e3-b80d-4213-80aa-995f57a9a476/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:30:02 compute-0 nova_compute[189608]: 2025-11-24 22:30:02.306 189613 DEBUG nova.policy [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1599850e48894151b7909b89547cd9e2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4ac27d3d1c734f4bab455262f79d3106', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 24 22:30:02 compute-0 nova_compute[189608]: 2025-11-24 22:30:02.371 189613 DEBUG oslo_concurrency.processutils [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cf45f1e3-b80d-4213-80aa-995f57a9a476/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:30:02 compute-0 nova_compute[189608]: 2025-11-24 22:30:02.372 189613 DEBUG nova.virt.disk.api [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Cannot resize image /var/lib/nova/instances/cf45f1e3-b80d-4213-80aa-995f57a9a476/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Nov 24 22:30:02 compute-0 nova_compute[189608]: 2025-11-24 22:30:02.373 189613 DEBUG nova.objects.instance [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Lazy-loading 'migration_context' on Instance uuid cf45f1e3-b80d-4213-80aa-995f57a9a476 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:30:02 compute-0 nova_compute[189608]: 2025-11-24 22:30:02.387 189613 DEBUG nova.virt.libvirt.driver [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 24 22:30:02 compute-0 nova_compute[189608]: 2025-11-24 22:30:02.388 189613 DEBUG nova.virt.libvirt.driver [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Ensure instance console log exists: /var/lib/nova/instances/cf45f1e3-b80d-4213-80aa-995f57a9a476/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 24 22:30:02 compute-0 nova_compute[189608]: 2025-11-24 22:30:02.389 189613 DEBUG oslo_concurrency.lockutils [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:30:02 compute-0 nova_compute[189608]: 2025-11-24 22:30:02.389 189613 DEBUG oslo_concurrency.lockutils [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:30:02 compute-0 nova_compute[189608]: 2025-11-24 22:30:02.390 189613 DEBUG oslo_concurrency.lockutils [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:30:03 compute-0 nova_compute[189608]: 2025-11-24 22:30:03.084 189613 DEBUG nova.compute.manager [req-7e52c72f-acc5-4cb9-9c55-6aa986cea52e req-3b42ef8b-bc88-4a52-8da7-9332fe65589d c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Received event network-changed-05ab28a1-e08f-4aa8-83d6-671fd5720283 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:30:03 compute-0 nova_compute[189608]: 2025-11-24 22:30:03.085 189613 DEBUG nova.compute.manager [req-7e52c72f-acc5-4cb9-9c55-6aa986cea52e req-3b42ef8b-bc88-4a52-8da7-9332fe65589d c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Refreshing instance network info cache due to event network-changed-05ab28a1-e08f-4aa8-83d6-671fd5720283. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 22:30:03 compute-0 nova_compute[189608]: 2025-11-24 22:30:03.085 189613 DEBUG oslo_concurrency.lockutils [req-7e52c72f-acc5-4cb9-9c55-6aa986cea52e req-3b42ef8b-bc88-4a52-8da7-9332fe65589d c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "refresh_cache-57d86171-0790-4408-ae34-dfc07ee52747" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
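Note: the `network-changed` events handled under req-7e52c72f are not generated locally: Neutron posts them to Nova's os-server-external-events API as the port's status changes, and the compute manager refreshes the instance's network info cache in response. The notification is shaped roughly as in the sketch below; the endpoint and token are placeholders, the server and port UUIDs are taken from the log:

```python
import requests

NOVA = "https://nova.example.com/v2.1"   # placeholder endpoint
TOKEN = "<keystone-token>"               # placeholder credential

payload = {"events": [{
    "name": "network-changed",
    "server_uuid": "57d86171-0790-4408-ae34-dfc07ee52747",
    "tag": "05ab28a1-e08f-4aa8-83d6-671fd5720283",  # the port from the log
}]}

resp = requests.post(f"{NOVA}/os-server-external-events",
                     json=payload,
                     headers={"X-Auth-Token": TOKEN})
resp.raise_for_status()
```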
Nov 24 22:30:03 compute-0 nova_compute[189608]: 2025-11-24 22:30:03.123 189613 DEBUG nova.network.neutron [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Successfully updated port: fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 24 22:30:03 compute-0 nova_compute[189608]: 2025-11-24 22:30:03.151 189613 DEBUG oslo_concurrency.lockutils [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Acquiring lock "refresh_cache-f238e71a-660e-497c-8472-193245387bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:30:03 compute-0 nova_compute[189608]: 2025-11-24 22:30:03.151 189613 DEBUG oslo_concurrency.lockutils [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Acquired lock "refresh_cache-f238e71a-660e-497c-8472-193245387bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:30:03 compute-0 nova_compute[189608]: 2025-11-24 22:30:03.151 189613 DEBUG nova.network.neutron [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 22:30:03 compute-0 nova_compute[189608]: 2025-11-24 22:30:03.373 189613 DEBUG nova.network.neutron [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 22:30:03 compute-0 nova_compute[189608]: 2025-11-24 22:30:03.409 189613 DEBUG nova.compute.manager [req-d6cf747e-5b34-4c8f-9483-79a5debe84da req-9868d36d-68cf-44a8-b8b1-0ba331da8288 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Received event network-changed-fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:30:03 compute-0 nova_compute[189608]: 2025-11-24 22:30:03.411 189613 DEBUG nova.compute.manager [req-d6cf747e-5b34-4c8f-9483-79a5debe84da req-9868d36d-68cf-44a8-b8b1-0ba331da8288 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Refreshing instance network info cache due to event network-changed-fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 22:30:03 compute-0 nova_compute[189608]: 2025-11-24 22:30:03.412 189613 DEBUG oslo_concurrency.lockutils [req-d6cf747e-5b34-4c8f-9483-79a5debe84da req-9868d36d-68cf-44a8-b8b1-0ba331da8288 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "refresh_cache-f238e71a-660e-497c-8472-193245387bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:30:04 compute-0 nova_compute[189608]: 2025-11-24 22:30:04.012 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.265 189613 DEBUG nova.network.neutron [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Successfully created port: 8f00051e-bd87-48eb-aba6-5dbf3d527aef _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.359 189613 DEBUG nova.network.neutron [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Updating instance_info_cache with network_info: [{"id": "05ab28a1-e08f-4aa8-83d6-671fd5720283", "address": "fa:16:3e:ec:6d:17", "network": {"id": "f12ed9ff-32cf-41a2-a508-d96ae5468fa1", "bridge": "br-int", "label": "tempest-ServersTestJSON-2010443651-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "47b8cf2705154817a1a23039debe2ac1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05ab28a1-e0", "ovs_interfaceid": "05ab28a1-e08f-4aa8-83d6-671fd5720283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.385 189613 DEBUG oslo_concurrency.lockutils [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Releasing lock "refresh_cache-57d86171-0790-4408-ae34-dfc07ee52747" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.385 189613 DEBUG nova.compute.manager [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Instance network_info: |[{"id": "05ab28a1-e08f-4aa8-83d6-671fd5720283", "address": "fa:16:3e:ec:6d:17", "network": {"id": "f12ed9ff-32cf-41a2-a508-d96ae5468fa1", "bridge": "br-int", "label": "tempest-ServersTestJSON-2010443651-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "47b8cf2705154817a1a23039debe2ac1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05ab28a1-e0", "ovs_interfaceid": "05ab28a1-e08f-4aa8-83d6-671fd5720283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
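Note: the network_info blob cached for the instance is a list of VIF dictionaries; the fields used most downstream are the port ID, MAC address, OVS bridge, tap device name and fixed IPs. A small sketch that pulls those out of an entry shaped like the one above (values copied from the log, structure trimmed to the relevant keys):

```python
vif = {
    "id": "05ab28a1-e08f-4aa8-83d6-671fd5720283",
    "address": "fa:16:3e:ec:6d:17",
    "devname": "tap05ab28a1-e0",
    "details": {"bridge_name": "br-int"},
    "network": {"subnets": [
        {"cidr": "10.100.0.0/28",
         "ips": [{"address": "10.100.0.3", "type": "fixed"}]},
    ]},
}

fixed_ips = [ip["address"]
             for subnet in vif["network"]["subnets"]
             for ip in subnet["ips"]]

# tap05ab28a1-e0 on br-int carries fa:16:3e:ec:6d:17 / 10.100.0.3
print(vif["devname"], vif["details"]["bridge_name"],
      vif["address"], fixed_ips)
```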
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.386 189613 DEBUG oslo_concurrency.lockutils [req-7e52c72f-acc5-4cb9-9c55-6aa986cea52e req-3b42ef8b-bc88-4a52-8da7-9332fe65589d c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquired lock "refresh_cache-57d86171-0790-4408-ae34-dfc07ee52747" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.387 189613 DEBUG nova.network.neutron [req-7e52c72f-acc5-4cb9-9c55-6aa986cea52e req-3b42ef8b-bc88-4a52-8da7-9332fe65589d c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Refreshing network info cache for port 05ab28a1-e08f-4aa8-83d6-671fd5720283 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.392 189613 DEBUG nova.virt.libvirt.driver [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Start _get_guest_xml network_info=[{"id": "05ab28a1-e08f-4aa8-83d6-671fd5720283", "address": "fa:16:3e:ec:6d:17", "network": {"id": "f12ed9ff-32cf-41a2-a508-d96ae5468fa1", "bridge": "br-int", "label": "tempest-ServersTestJSON-2010443651-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "47b8cf2705154817a1a23039debe2ac1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05ab28a1-e0", "ovs_interfaceid": "05ab28a1-e08f-4aa8-83d6-671fd5720283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T22:28:16Z,direct_url=<?>,disk_format='qcow2',id=ec71d7d5-c197-4331-bf8d-e2de71a8419f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='309342b7e3e849b2a5dd56651d8fa068',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T22:28:18Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'size': 0, 'encryption_format': None, 'guest_format': None, 'image_id': 'ec71d7d5-c197-4331-bf8d-e2de71a8419f'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.405 189613 WARNING nova.virt.libvirt.driver [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.415 189613 DEBUG nova.virt.libvirt.host [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.416 189613 DEBUG nova.virt.libvirt.host [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.430 189613 DEBUG nova.virt.libvirt.host [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.431 189613 DEBUG nova.virt.libvirt.host [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
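The pair of probes above is the host capability check for CPU bandwidth control: no cpu controller is found via cgroup v1, but the cgroup v2 unified hierarchy provides one. A minimal sketch of the v2 side of that check, assuming the standard /sys/fs/cgroup mount point (an illustration, not Nova's host.py code):

    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root="/sys/fs/cgroup"):
        # On a cgroup v2 host the enabled controllers are listed, space
        # separated, in the top-level cgroup.controllers file.
        controllers = Path(root, "cgroup.controllers")
        return controllers.is_file() and "cpu" in controllers.read_text().split()

    print(has_cgroupsv2_cpu_controller())  # True on this host, per the log above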
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.431 189613 DEBUG nova.virt.libvirt.driver [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.431 189613 DEBUG nova.virt.hardware [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-24T22:28:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a49f1e6c-1051-4dea-812e-0063121444a0',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T22:28:16Z,direct_url=<?>,disk_format='qcow2',id=ec71d7d5-c197-4331-bf8d-e2de71a8419f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='309342b7e3e849b2a5dd56651d8fa068',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T22:28:18Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.432 189613 DEBUG nova.virt.hardware [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.432 189613 DEBUG nova.virt.hardware [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.432 189613 DEBUG nova.virt.hardware [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.433 189613 DEBUG nova.virt.hardware [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.433 189613 DEBUG nova.virt.hardware [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.433 189613 DEBUG nova.virt.hardware [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.433 189613 DEBUG nova.virt.hardware [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.434 189613 DEBUG nova.virt.hardware [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.434 189613 DEBUG nova.virt.hardware [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.434 189613 DEBUG nova.virt.hardware [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
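With no topology limits or preferences from the flavor or image (everything 0:0:0 against maxima of 65536), the only layout that covers the flavor's single vCPU is 1 socket x 1 core x 1 thread, which is why exactly one possible topology is reported. A toy enumeration of that selection, assuming the rule "sockets * cores * threads == vcpus" within the maxima (a sketch, not the hardware.py implementation):

    from itertools import product

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # Enumerate (sockets, cores, threads) triples whose product matches the vCPU count.
        upper = lambda m: range(1, min(vcpus, m) + 1)
        return [(s, c, t)
                for s, c, t in product(upper(max_sockets), upper(max_cores), upper(max_threads))
                if s * c * t == vcpus]

    print(possible_topologies(1))  # [(1, 1, 1)] -- matches "Got 1 possible topologies"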
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.438 189613 DEBUG nova.virt.libvirt.vif [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T22:29:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-844916333',display_name='tempest-ServersTestJSON-server-844916333',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-844916333',id=8,image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK25sTgX27Ma/LFp7U6VDvFz8O1g4Du+V6L6YZGteUb6y5vGbgt4y46Su5lnL+FrhAwgjJ2IluYL+af3YtR+cgttH5w9PVzWoaxuNx/ODVCLQ6defQSD9k9PpCi90uV0og==',key_name='tempest-keypair-712078251',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='47b8cf2705154817a1a23039debe2ac1',ramdisk_id='',reservation_id='r-myjmvl4h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-891938329',owner_user_name='tempest-ServersTestJSON-891938329-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T22:29:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='304529bdd01048709c29df90922b1b2d',uuid=57d86171-0790-4408-ae34-dfc07ee52747,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05ab28a1-e08f-4aa8-83d6-671fd5720283", "address": "fa:16:3e:ec:6d:17", "network": {"id": "f12ed9ff-32cf-41a2-a508-d96ae5468fa1", "bridge": "br-int", "label": "tempest-ServersTestJSON-2010443651-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "47b8cf2705154817a1a23039debe2ac1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05ab28a1-e0", "ovs_interfaceid": "05ab28a1-e08f-4aa8-83d6-671fd5720283", "qbh_params": null, 
"qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.438 189613 DEBUG nova.network.os_vif_util [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Converting VIF {"id": "05ab28a1-e08f-4aa8-83d6-671fd5720283", "address": "fa:16:3e:ec:6d:17", "network": {"id": "f12ed9ff-32cf-41a2-a508-d96ae5468fa1", "bridge": "br-int", "label": "tempest-ServersTestJSON-2010443651-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "47b8cf2705154817a1a23039debe2ac1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05ab28a1-e0", "ovs_interfaceid": "05ab28a1-e08f-4aa8-83d6-671fd5720283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.439 189613 DEBUG nova.network.os_vif_util [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:6d:17,bridge_name='br-int',has_traffic_filtering=True,id=05ab28a1-e08f-4aa8-83d6-671fd5720283,network=Network(f12ed9ff-32cf-41a2-a508-d96ae5468fa1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05ab28a1-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
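The Neutron-style VIF dict is converted here into an os-vif versioned object before plugging. A rough sketch of building an equivalent object by hand, using the field names visible in the converted repr above (subnets and routes are elided, and the exact constructor usage should be treated as an assumption):

    from os_vif.objects import network as osv_network
    from os_vif.objects import vif as osv_vif

    vif = osv_vif.VIFOpenVSwitch(
        id="05ab28a1-e08f-4aa8-83d6-671fd5720283",
        address="fa:16:3e:ec:6d:17",
        vif_name="tap05ab28a1-e0",
        bridge_name="br-int",
        has_traffic_filtering=True,
        preserve_on_delete=False,
        port_profile=osv_vif.VIFPortProfileOpenVSwitch(
            interface_id="05ab28a1-e08f-4aa8-83d6-671fd5720283"),
        network=osv_network.Network(
            id="f12ed9ff-32cf-41a2-a508-d96ae5468fa1",
            bridge="br-int",
            mtu=1442),
    )
    print(vif)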
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.440 189613 DEBUG nova.objects.instance [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 57d86171-0790-4408-ae34-dfc07ee52747 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.454 189613 DEBUG nova.virt.libvirt.driver [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] End _get_guest_xml xml=<domain type="kvm">
Nov 24 22:30:05 compute-0 nova_compute[189608]:   <uuid>57d86171-0790-4408-ae34-dfc07ee52747</uuid>
Nov 24 22:30:05 compute-0 nova_compute[189608]:   <name>instance-00000008</name>
Nov 24 22:30:05 compute-0 nova_compute[189608]:   <memory>131072</memory>
Nov 24 22:30:05 compute-0 nova_compute[189608]:   <vcpu>1</vcpu>
Nov 24 22:30:05 compute-0 nova_compute[189608]:   <metadata>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <nova:name>tempest-ServersTestJSON-server-844916333</nova:name>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <nova:creationTime>2025-11-24 22:30:05</nova:creationTime>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <nova:flavor name="m1.nano">
Nov 24 22:30:05 compute-0 nova_compute[189608]:         <nova:memory>128</nova:memory>
Nov 24 22:30:05 compute-0 nova_compute[189608]:         <nova:disk>1</nova:disk>
Nov 24 22:30:05 compute-0 nova_compute[189608]:         <nova:swap>0</nova:swap>
Nov 24 22:30:05 compute-0 nova_compute[189608]:         <nova:ephemeral>0</nova:ephemeral>
Nov 24 22:30:05 compute-0 nova_compute[189608]:         <nova:vcpus>1</nova:vcpus>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       </nova:flavor>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <nova:owner>
Nov 24 22:30:05 compute-0 nova_compute[189608]:         <nova:user uuid="304529bdd01048709c29df90922b1b2d">tempest-ServersTestJSON-891938329-project-member</nova:user>
Nov 24 22:30:05 compute-0 nova_compute[189608]:         <nova:project uuid="47b8cf2705154817a1a23039debe2ac1">tempest-ServersTestJSON-891938329</nova:project>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       </nova:owner>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <nova:root type="image" uuid="ec71d7d5-c197-4331-bf8d-e2de71a8419f"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <nova:ports>
Nov 24 22:30:05 compute-0 nova_compute[189608]:         <nova:port uuid="05ab28a1-e08f-4aa8-83d6-671fd5720283">
Nov 24 22:30:05 compute-0 nova_compute[189608]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:         </nova:port>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       </nova:ports>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     </nova:instance>
Nov 24 22:30:05 compute-0 nova_compute[189608]:   </metadata>
Nov 24 22:30:05 compute-0 nova_compute[189608]:   <sysinfo type="smbios">
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <system>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <entry name="manufacturer">RDO</entry>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <entry name="product">OpenStack Compute</entry>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <entry name="serial">57d86171-0790-4408-ae34-dfc07ee52747</entry>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <entry name="uuid">57d86171-0790-4408-ae34-dfc07ee52747</entry>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <entry name="family">Virtual Machine</entry>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     </system>
Nov 24 22:30:05 compute-0 nova_compute[189608]:   </sysinfo>
Nov 24 22:30:05 compute-0 nova_compute[189608]:   <os>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <boot dev="hd"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <smbios mode="sysinfo"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:   </os>
Nov 24 22:30:05 compute-0 nova_compute[189608]:   <features>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <acpi/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <apic/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <vmcoreinfo/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:   </features>
Nov 24 22:30:05 compute-0 nova_compute[189608]:   <clock offset="utc">
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <timer name="pit" tickpolicy="delay"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <timer name="hpet" present="no"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:   </clock>
Nov 24 22:30:05 compute-0 nova_compute[189608]:   <cpu mode="host-model" match="exact">
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <topology sockets="1" cores="1" threads="1"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:   </cpu>
Nov 24 22:30:05 compute-0 nova_compute[189608]:   <devices>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <disk type="file" device="disk">
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <source file="/var/lib/nova/instances/57d86171-0790-4408-ae34-dfc07ee52747/disk"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <target dev="vda" bus="virtio"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     </disk>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <disk type="file" device="cdrom">
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <driver name="qemu" type="raw" cache="none"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <source file="/var/lib/nova/instances/57d86171-0790-4408-ae34-dfc07ee52747/disk.config"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <target dev="sda" bus="sata"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     </disk>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <interface type="ethernet">
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <mac address="fa:16:3e:ec:6d:17"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <model type="virtio"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <driver name="vhost" rx_queue_size="512"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <mtu size="1442"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <target dev="tap05ab28a1-e0"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     </interface>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <serial type="pty">
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <log file="/var/lib/nova/instances/57d86171-0790-4408-ae34-dfc07ee52747/console.log" append="off"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     </serial>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <video>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <model type="virtio"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     </video>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <input type="tablet" bus="usb"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <rng model="virtio">
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <backend model="random">/dev/urandom</backend>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     </rng>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="usb" index="0"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <memballoon model="virtio">
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <stats period="10"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     </memballoon>
Nov 24 22:30:05 compute-0 nova_compute[189608]:   </devices>
Nov 24 22:30:05 compute-0 nova_compute[189608]: </domain>
Nov 24 22:30:05 compute-0 nova_compute[189608]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
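The XML dump that ends here is what the libvirt driver hands to libvirtd to realize the guest. As a hedged illustration, an equivalent domain could be defined and started directly with the libvirt Python bindings; Nova itself goes through its own Host/Guest wrappers rather than calling these functions this way, and the file name below is a placeholder.

    import libvirt

    with open("instance-00000008.xml") as f:   # hypothetical file holding the XML above
        xml = f.read()

    conn = libvirt.open("qemu:///system")      # system libvirtd, matching domain type="kvm"
    try:
        dom = conn.defineXML(xml)              # persist the domain definition
        dom.create()                           # start it (equivalent to 'virsh start')
        print(dom.name(), dom.UUIDString())
    finally:
        conn.close()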
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.455 189613 DEBUG nova.compute.manager [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Preparing to wait for external event network-vif-plugged-05ab28a1-e08f-4aa8-83d6-671fd5720283 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.455 189613 DEBUG oslo_concurrency.lockutils [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Acquiring lock "57d86171-0790-4408-ae34-dfc07ee52747-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.456 189613 DEBUG oslo_concurrency.lockutils [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Lock "57d86171-0790-4408-ae34-dfc07ee52747-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.456 189613 DEBUG oslo_concurrency.lockutils [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Lock "57d86171-0790-4408-ae34-dfc07ee52747-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
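The acquire/release pair above serializes access to the per-instance event table while the compute manager registers the network-vif-plugged event it is about to wait on. A minimal sketch of that locking pattern with oslo.concurrency (the lock name comes from the log; the event store below is illustrative):

    from oslo_concurrency import lockutils

    events = {}

    def prepare_for_instance_event(instance_uuid, event_name):
        # Same pattern as the log: take the "<uuid>-events" lock, then
        # create-or-get the event entry while the lock is held.
        with lockutils.lock(f"{instance_uuid}-events"):
            return events.setdefault((instance_uuid, event_name), object())

    prepare_for_instance_event(
        "57d86171-0790-4408-ae34-dfc07ee52747",
        "network-vif-plugged-05ab28a1-e08f-4aa8-83d6-671fd5720283")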
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.457 189613 DEBUG nova.virt.libvirt.vif [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T22:29:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-844916333',display_name='tempest-ServersTestJSON-server-844916333',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-844916333',id=8,image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK25sTgX27Ma/LFp7U6VDvFz8O1g4Du+V6L6YZGteUb6y5vGbgt4y46Su5lnL+FrhAwgjJ2IluYL+af3YtR+cgttH5w9PVzWoaxuNx/ODVCLQ6defQSD9k9PpCi90uV0og==',key_name='tempest-keypair-712078251',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='47b8cf2705154817a1a23039debe2ac1',ramdisk_id='',reservation_id='r-myjmvl4h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-891938329',owner_user_name='tempest-ServersTestJSON-891938329-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T22:29:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='304529bdd01048709c29df90922b1b2d',uuid=57d86171-0790-4408-ae34-dfc07ee52747,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05ab28a1-e08f-4aa8-83d6-671fd5720283", "address": "fa:16:3e:ec:6d:17", "network": {"id": "f12ed9ff-32cf-41a2-a508-d96ae5468fa1", "bridge": "br-int", "label": "tempest-ServersTestJSON-2010443651-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "47b8cf2705154817a1a23039debe2ac1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05ab28a1-e0", "ovs_interfaceid": "05ab28a1-e08f-4aa8-83d6-671fd5720283", "qbh_params": 
null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.457 189613 DEBUG nova.network.os_vif_util [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Converting VIF {"id": "05ab28a1-e08f-4aa8-83d6-671fd5720283", "address": "fa:16:3e:ec:6d:17", "network": {"id": "f12ed9ff-32cf-41a2-a508-d96ae5468fa1", "bridge": "br-int", "label": "tempest-ServersTestJSON-2010443651-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "47b8cf2705154817a1a23039debe2ac1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05ab28a1-e0", "ovs_interfaceid": "05ab28a1-e08f-4aa8-83d6-671fd5720283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.458 189613 DEBUG nova.network.os_vif_util [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:6d:17,bridge_name='br-int',has_traffic_filtering=True,id=05ab28a1-e08f-4aa8-83d6-671fd5720283,network=Network(f12ed9ff-32cf-41a2-a508-d96ae5468fa1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05ab28a1-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.458 189613 DEBUG os_vif [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:6d:17,bridge_name='br-int',has_traffic_filtering=True,id=05ab28a1-e08f-4aa8-83d6-671fd5720283,network=Network(f12ed9ff-32cf-41a2-a508-d96ae5468fa1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05ab28a1-e0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.459 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.459 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.460 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.464 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.464 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap05ab28a1-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.465 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap05ab28a1-e0, col_values=(('external_ids', {'iface-id': '05ab28a1-e08f-4aa8-83d6-671fd5720283', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ec:6d:17', 'vm-uuid': '57d86171-0790-4408-ae34-dfc07ee52747'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:30:05 compute-0 NetworkManager[56413]: <info>  [1764023405.4677] manager: (tap05ab28a1-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.466 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.470 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.483 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.484 189613 INFO os_vif [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:6d:17,bridge_name='br-int',has_traffic_filtering=True,id=05ab28a1-e08f-4aa8-83d6-671fd5720283,network=Network(f12ed9ff-32cf-41a2-a508-d96ae5468fa1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05ab28a1-e0')
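The successful plug boils down to the three idempotent OVSDB operations in the transaction log above: ensure br-int exists, add the tap port, and stamp the Interface row with the external_ids (iface-id, attached-mac, vm-uuid) that let ovn-controller bind the port. A rough ovsdbapp sketch of the same transaction; the database socket path and timeout are assumptions.

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Hypothetical local ovsdb-server socket; adjust to the host's setup.
    idl = connection.OvsdbIdl.from_server("unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br("br-int", may_exist=True, datapath_type="system"))
        txn.add(api.add_port("br-int", "tap05ab28a1-e0", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tap05ab28a1-e0",
            ("external_ids", {
                "iface-id": "05ab28a1-e08f-4aa8-83d6-671fd5720283",
                "iface-status": "active",
                "attached-mac": "fa:16:3e:ec:6d:17",
                "vm-uuid": "57d86171-0790-4408-ae34-dfc07ee52747"})))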
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.567 189613 DEBUG nova.virt.libvirt.driver [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.568 189613 DEBUG nova.virt.libvirt.driver [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.568 189613 DEBUG nova.virt.libvirt.driver [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] No VIF found with MAC fa:16:3e:ec:6d:17, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.569 189613 INFO nova.virt.libvirt.driver [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Using config drive
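"Using config drive" means the instance metadata will also be exposed through the small cdrom defined in the XML above (the sata disk.config device). Inside the guest it can be read like any removable volume; a hedged sketch, where the device node and mount point are assumptions that depend on the guest:

    import json, subprocess, tempfile

    mnt = tempfile.mkdtemp()
    # /dev/sr0 is a typical in-guest name for the config-drive cdrom; adjust as needed.
    subprocess.run(["mount", "-o", "ro", "/dev/sr0", mnt], check=True)
    with open(f"{mnt}/openstack/latest/meta_data.json") as f:
        meta = json.load(f)
    print(meta.get("uuid"), meta.get("name"))  # instance uuid and name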
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.579 189613 DEBUG nova.network.neutron [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Updating instance_info_cache with network_info: [{"id": "fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13", "address": "fa:16:3e:40:76:1e", "network": {"id": "29585b3c-5eec-4652-ae2f-4aa9ec19d924", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-98517945-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "97e21ffeec1c4428ba3d70499fc3281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd48bd9-f9", "ovs_interfaceid": "fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.614 189613 DEBUG oslo_concurrency.lockutils [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Releasing lock "refresh_cache-f238e71a-660e-497c-8472-193245387bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.615 189613 DEBUG nova.compute.manager [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Instance network_info: |[{"id": "fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13", "address": "fa:16:3e:40:76:1e", "network": {"id": "29585b3c-5eec-4652-ae2f-4aa9ec19d924", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-98517945-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "97e21ffeec1c4428ba3d70499fc3281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd48bd9-f9", "ovs_interfaceid": "fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.618 189613 DEBUG oslo_concurrency.lockutils [req-d6cf747e-5b34-4c8f-9483-79a5debe84da req-9868d36d-68cf-44a8-b8b1-0ba331da8288 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquired lock "refresh_cache-f238e71a-660e-497c-8472-193245387bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.619 189613 DEBUG nova.network.neutron [req-d6cf747e-5b34-4c8f-9483-79a5debe84da req-9868d36d-68cf-44a8-b8b1-0ba331da8288 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Refreshing network info cache for port fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.624 189613 DEBUG nova.virt.libvirt.driver [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Start _get_guest_xml network_info=[{"id": "fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13", "address": "fa:16:3e:40:76:1e", "network": {"id": "29585b3c-5eec-4652-ae2f-4aa9ec19d924", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-98517945-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "97e21ffeec1c4428ba3d70499fc3281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd48bd9-f9", "ovs_interfaceid": "fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T22:28:16Z,direct_url=<?>,disk_format='qcow2',id=ec71d7d5-c197-4331-bf8d-e2de71a8419f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='309342b7e3e849b2a5dd56651d8fa068',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T22:28:18Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'size': 0, 'encryption_format': None, 'guest_format': None, 'image_id': 'ec71d7d5-c197-4331-bf8d-e2de71a8419f'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.636 189613 WARNING nova.virt.libvirt.driver [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.657 189613 DEBUG oslo_concurrency.lockutils [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Acquiring lock "a3bee9ba-6618-44bd-a443-da9fff6862a9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.658 189613 DEBUG oslo_concurrency.lockutils [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Lock "a3bee9ba-6618-44bd-a443-da9fff6862a9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.663 189613 DEBUG nova.virt.libvirt.host [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.665 189613 DEBUG nova.virt.libvirt.host [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.673 189613 DEBUG nova.virt.libvirt.host [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.674 189613 DEBUG nova.virt.libvirt.host [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.674 189613 DEBUG nova.virt.libvirt.driver [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.675 189613 DEBUG nova.virt.hardware [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-24T22:28:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a49f1e6c-1051-4dea-812e-0063121444a0',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T22:28:16Z,direct_url=<?>,disk_format='qcow2',id=ec71d7d5-c197-4331-bf8d-e2de71a8419f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='309342b7e3e849b2a5dd56651d8fa068',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T22:28:18Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.675 189613 DEBUG nova.virt.hardware [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.675 189613 DEBUG nova.virt.hardware [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.676 189613 DEBUG nova.virt.hardware [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.676 189613 DEBUG nova.virt.hardware [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.676 189613 DEBUG nova.virt.hardware [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.676 189613 DEBUG nova.virt.hardware [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.677 189613 DEBUG nova.virt.hardware [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.677 189613 DEBUG nova.virt.hardware [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.677 189613 DEBUG nova.virt.hardware [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.677 189613 DEBUG nova.virt.hardware [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.682 189613 DEBUG nova.virt.libvirt.vif [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T22:29:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1585588029',display_name='tempest-ServerActionsTestJSON-server-1585588029',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1585588029',id=9,image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDLE9GM2vVS7DtVhD6R5uAcKdwWIHZiUIj0cZuSYgN8E0Q128lQ7w/rrfvzePQt5xD3e+tmmR17Qm6/SP88RdZiNDkcZe488bZoDDPSOfWrMiNmhlRVlcu8KaGfz+0SLYw==',key_name='tempest-keypair-731506490',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='97e21ffeec1c4428ba3d70499fc3281f',ramdisk_id='',reservation_id='r-0mavx5gw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-2097692874',owner_user_name='tempest-ServerActionsTestJSON-2097692874-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T22:29:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='11288fa7771048b4a8faf1d6485ab059',uuid=f238e71a-660e-497c-8472-193245387bcf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13", "address": "fa:16:3e:40:76:1e", "network": {"id": "29585b3c-5eec-4652-ae2f-4aa9ec19d924", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-98517945-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "97e21ffeec1c4428ba3d70499fc3281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd48bd9-f9", "ovs_interfaceid": "fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13", "qbh_params": 
null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.682 189613 DEBUG nova.network.os_vif_util [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Converting VIF {"id": "fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13", "address": "fa:16:3e:40:76:1e", "network": {"id": "29585b3c-5eec-4652-ae2f-4aa9ec19d924", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-98517945-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "97e21ffeec1c4428ba3d70499fc3281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd48bd9-f9", "ovs_interfaceid": "fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.683 189613 DEBUG nova.network.os_vif_util [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:76:1e,bridge_name='br-int',has_traffic_filtering=True,id=fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13,network=Network(29585b3c-5eec-4652-ae2f-4aa9ec19d924),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdd48bd9-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.684 189613 DEBUG nova.objects.instance [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Lazy-loading 'pci_devices' on Instance uuid f238e71a-660e-497c-8472-193245387bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.687 189613 DEBUG nova.compute.manager [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.706 189613 DEBUG nova.virt.libvirt.driver [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] End _get_guest_xml xml=<domain type="kvm">
Nov 24 22:30:05 compute-0 nova_compute[189608]:   <uuid>f238e71a-660e-497c-8472-193245387bcf</uuid>
Nov 24 22:30:05 compute-0 nova_compute[189608]:   <name>instance-00000009</name>
Nov 24 22:30:05 compute-0 nova_compute[189608]:   <memory>131072</memory>
Nov 24 22:30:05 compute-0 nova_compute[189608]:   <vcpu>1</vcpu>
Nov 24 22:30:05 compute-0 nova_compute[189608]:   <metadata>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <nova:name>tempest-ServerActionsTestJSON-server-1585588029</nova:name>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <nova:creationTime>2025-11-24 22:30:05</nova:creationTime>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <nova:flavor name="m1.nano">
Nov 24 22:30:05 compute-0 nova_compute[189608]:         <nova:memory>128</nova:memory>
Nov 24 22:30:05 compute-0 nova_compute[189608]:         <nova:disk>1</nova:disk>
Nov 24 22:30:05 compute-0 nova_compute[189608]:         <nova:swap>0</nova:swap>
Nov 24 22:30:05 compute-0 nova_compute[189608]:         <nova:ephemeral>0</nova:ephemeral>
Nov 24 22:30:05 compute-0 nova_compute[189608]:         <nova:vcpus>1</nova:vcpus>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       </nova:flavor>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <nova:owner>
Nov 24 22:30:05 compute-0 nova_compute[189608]:         <nova:user uuid="11288fa7771048b4a8faf1d6485ab059">tempest-ServerActionsTestJSON-2097692874-project-member</nova:user>
Nov 24 22:30:05 compute-0 nova_compute[189608]:         <nova:project uuid="97e21ffeec1c4428ba3d70499fc3281f">tempest-ServerActionsTestJSON-2097692874</nova:project>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       </nova:owner>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <nova:root type="image" uuid="ec71d7d5-c197-4331-bf8d-e2de71a8419f"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <nova:ports>
Nov 24 22:30:05 compute-0 nova_compute[189608]:         <nova:port uuid="fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13">
Nov 24 22:30:05 compute-0 nova_compute[189608]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:         </nova:port>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       </nova:ports>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     </nova:instance>
Nov 24 22:30:05 compute-0 nova_compute[189608]:   </metadata>
Nov 24 22:30:05 compute-0 nova_compute[189608]:   <sysinfo type="smbios">
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <system>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <entry name="manufacturer">RDO</entry>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <entry name="product">OpenStack Compute</entry>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <entry name="serial">f238e71a-660e-497c-8472-193245387bcf</entry>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <entry name="uuid">f238e71a-660e-497c-8472-193245387bcf</entry>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <entry name="family">Virtual Machine</entry>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     </system>
Nov 24 22:30:05 compute-0 nova_compute[189608]:   </sysinfo>
Nov 24 22:30:05 compute-0 nova_compute[189608]:   <os>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <boot dev="hd"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <smbios mode="sysinfo"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:   </os>
Nov 24 22:30:05 compute-0 nova_compute[189608]:   <features>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <acpi/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <apic/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <vmcoreinfo/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:   </features>
Nov 24 22:30:05 compute-0 nova_compute[189608]:   <clock offset="utc">
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <timer name="pit" tickpolicy="delay"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <timer name="hpet" present="no"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:   </clock>
Nov 24 22:30:05 compute-0 nova_compute[189608]:   <cpu mode="host-model" match="exact">
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <topology sockets="1" cores="1" threads="1"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:   </cpu>
Nov 24 22:30:05 compute-0 nova_compute[189608]:   <devices>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <disk type="file" device="disk">
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <source file="/var/lib/nova/instances/f238e71a-660e-497c-8472-193245387bcf/disk"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <target dev="vda" bus="virtio"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     </disk>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <disk type="file" device="cdrom">
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <driver name="qemu" type="raw" cache="none"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <source file="/var/lib/nova/instances/f238e71a-660e-497c-8472-193245387bcf/disk.config"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <target dev="sda" bus="sata"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     </disk>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <interface type="ethernet">
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <mac address="fa:16:3e:40:76:1e"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <model type="virtio"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <driver name="vhost" rx_queue_size="512"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <mtu size="1442"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <target dev="tapfdd48bd9-f9"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     </interface>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <serial type="pty">
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <log file="/var/lib/nova/instances/f238e71a-660e-497c-8472-193245387bcf/console.log" append="off"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     </serial>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <video>
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <model type="virtio"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     </video>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <input type="tablet" bus="usb"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <rng model="virtio">
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <backend model="random">/dev/urandom</backend>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     </rng>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <controller type="usb" index="0"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     <memballoon model="virtio">
Nov 24 22:30:05 compute-0 nova_compute[189608]:       <stats period="10"/>
Nov 24 22:30:05 compute-0 nova_compute[189608]:     </memballoon>
Nov 24 22:30:05 compute-0 nova_compute[189608]:   </devices>
Nov 24 22:30:05 compute-0 nova_compute[189608]: </domain>
Nov 24 22:30:05 compute-0 nova_compute[189608]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.707 189613 DEBUG nova.compute.manager [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Preparing to wait for external event network-vif-plugged-fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.707 189613 DEBUG oslo_concurrency.lockutils [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Acquiring lock "f238e71a-660e-497c-8472-193245387bcf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.708 189613 DEBUG oslo_concurrency.lockutils [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Lock "f238e71a-660e-497c-8472-193245387bcf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.708 189613 DEBUG oslo_concurrency.lockutils [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Lock "f238e71a-660e-497c-8472-193245387bcf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.709 189613 DEBUG nova.virt.libvirt.vif [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T22:29:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1585588029',display_name='tempest-ServerActionsTestJSON-server-1585588029',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1585588029',id=9,image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDLE9GM2vVS7DtVhD6R5uAcKdwWIHZiUIj0cZuSYgN8E0Q128lQ7w/rrfvzePQt5xD3e+tmmR17Qm6/SP88RdZiNDkcZe488bZoDDPSOfWrMiNmhlRVlcu8KaGfz+0SLYw==',key_name='tempest-keypair-731506490',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='97e21ffeec1c4428ba3d70499fc3281f',ramdisk_id='',reservation_id='r-0mavx5gw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-2097692874',owner_user_name='tempest-ServerActionsTestJSON-2097692874-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T22:29:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='11288fa7771048b4a8faf1d6485ab059',uuid=f238e71a-660e-497c-8472-193245387bcf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13", "address": "fa:16:3e:40:76:1e", "network": {"id": "29585b3c-5eec-4652-ae2f-4aa9ec19d924", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-98517945-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "97e21ffeec1c4428ba3d70499fc3281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd48bd9-f9", "ovs_interfaceid": "fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13", 
"qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.709 189613 DEBUG nova.network.os_vif_util [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Converting VIF {"id": "fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13", "address": "fa:16:3e:40:76:1e", "network": {"id": "29585b3c-5eec-4652-ae2f-4aa9ec19d924", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-98517945-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "97e21ffeec1c4428ba3d70499fc3281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd48bd9-f9", "ovs_interfaceid": "fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.710 189613 DEBUG nova.network.os_vif_util [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:76:1e,bridge_name='br-int',has_traffic_filtering=True,id=fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13,network=Network(29585b3c-5eec-4652-ae2f-4aa9ec19d924),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdd48bd9-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.710 189613 DEBUG os_vif [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:76:1e,bridge_name='br-int',has_traffic_filtering=True,id=fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13,network=Network(29585b3c-5eec-4652-ae2f-4aa9ec19d924),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdd48bd9-f9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.711 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.711 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.712 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.715 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.716 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfdd48bd9-f9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.716 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfdd48bd9-f9, col_values=(('external_ids', {'iface-id': 'fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:40:76:1e', 'vm-uuid': 'f238e71a-660e-497c-8472-193245387bcf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.721 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 22:30:05 compute-0 NetworkManager[56413]: <info>  [1764023405.7227] manager: (tapfdd48bd9-f9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.737 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.738 189613 INFO os_vif [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:76:1e,bridge_name='br-int',has_traffic_filtering=True,id=fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13,network=Network(29585b3c-5eec-4652-ae2f-4aa9ec19d924),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdd48bd9-f9')
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.773 189613 DEBUG oslo_concurrency.lockutils [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.774 189613 DEBUG oslo_concurrency.lockutils [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.785 189613 DEBUG nova.virt.hardware [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.785 189613 INFO nova.compute.claims [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Claim successful on node compute-0.ctlplane.example.com
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.824 189613 DEBUG nova.virt.libvirt.driver [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.825 189613 DEBUG nova.virt.libvirt.driver [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.825 189613 DEBUG nova.virt.libvirt.driver [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] No VIF found with MAC fa:16:3e:40:76:1e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 24 22:30:05 compute-0 nova_compute[189608]: 2025-11-24 22:30:05.825 189613 INFO nova.virt.libvirt.driver [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Using config drive
Nov 24 22:30:06 compute-0 nova_compute[189608]: 2025-11-24 22:30:06.069 189613 DEBUG nova.compute.provider_tree [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:30:06 compute-0 nova_compute[189608]: 2025-11-24 22:30:06.082 189613 DEBUG nova.scheduler.client.report [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:30:06 compute-0 nova_compute[189608]: 2025-11-24 22:30:06.111 189613 DEBUG oslo_concurrency.lockutils [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.337s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:30:06 compute-0 nova_compute[189608]: 2025-11-24 22:30:06.112 189613 DEBUG nova.compute.manager [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 24 22:30:06 compute-0 nova_compute[189608]: 2025-11-24 22:30:06.149 189613 INFO nova.virt.libvirt.driver [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Creating config drive at /var/lib/nova/instances/57d86171-0790-4408-ae34-dfc07ee52747/disk.config
Nov 24 22:30:06 compute-0 nova_compute[189608]: 2025-11-24 22:30:06.161 189613 DEBUG oslo_concurrency.processutils [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/57d86171-0790-4408-ae34-dfc07ee52747/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo8bjkgj8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:30:06 compute-0 nova_compute[189608]: 2025-11-24 22:30:06.199 189613 DEBUG nova.compute.manager [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 24 22:30:06 compute-0 nova_compute[189608]: 2025-11-24 22:30:06.200 189613 DEBUG nova.network.neutron [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 24 22:30:06 compute-0 nova_compute[189608]: 2025-11-24 22:30:06.219 189613 INFO nova.virt.libvirt.driver [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 24 22:30:06 compute-0 nova_compute[189608]: 2025-11-24 22:30:06.243 189613 DEBUG nova.compute.manager [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 24 22:30:06 compute-0 nova_compute[189608]: 2025-11-24 22:30:06.303 189613 INFO nova.virt.libvirt.driver [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Creating config drive at /var/lib/nova/instances/f238e71a-660e-497c-8472-193245387bcf/disk.config
Nov 24 22:30:06 compute-0 nova_compute[189608]: 2025-11-24 22:30:06.318 189613 DEBUG oslo_concurrency.processutils [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f238e71a-660e-497c-8472-193245387bcf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxiocid3f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:30:06 compute-0 nova_compute[189608]: 2025-11-24 22:30:06.344 189613 DEBUG oslo_concurrency.processutils [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/57d86171-0790-4408-ae34-dfc07ee52747/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo8bjkgj8" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:30:06 compute-0 nova_compute[189608]: 2025-11-24 22:30:06.347 189613 DEBUG nova.compute.manager [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 24 22:30:06 compute-0 nova_compute[189608]: 2025-11-24 22:30:06.349 189613 DEBUG nova.virt.libvirt.driver [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 24 22:30:06 compute-0 nova_compute[189608]: 2025-11-24 22:30:06.350 189613 INFO nova.virt.libvirt.driver [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Creating image(s)
Nov 24 22:30:06 compute-0 nova_compute[189608]: 2025-11-24 22:30:06.350 189613 DEBUG oslo_concurrency.lockutils [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Acquiring lock "/var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:30:06 compute-0 nova_compute[189608]: 2025-11-24 22:30:06.351 189613 DEBUG oslo_concurrency.lockutils [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Lock "/var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:30:06 compute-0 nova_compute[189608]: 2025-11-24 22:30:06.352 189613 DEBUG oslo_concurrency.lockutils [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Lock "/var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:30:06 compute-0 nova_compute[189608]: 2025-11-24 22:30:06.352 189613 DEBUG oslo_concurrency.lockutils [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Acquiring lock "e3114b07aff678ef05dd12aafd3a42953942e41b" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:30:06 compute-0 nova_compute[189608]: 2025-11-24 22:30:06.353 189613 DEBUG oslo_concurrency.lockutils [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Lock "e3114b07aff678ef05dd12aafd3a42953942e41b" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:30:06 compute-0 kernel: tap05ab28a1-e0: entered promiscuous mode
Nov 24 22:30:06 compute-0 NetworkManager[56413]: <info>  [1764023406.4621] manager: (tap05ab28a1-e0): new Tun device (/org/freedesktop/NetworkManager/Devices/45)
Nov 24 22:30:06 compute-0 nova_compute[189608]: 2025-11-24 22:30:06.458 189613 DEBUG oslo_concurrency.processutils [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f238e71a-660e-497c-8472-193245387bcf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxiocid3f" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:30:06 compute-0 ovn_controller[97889]: 2025-11-24T22:30:06Z|00086|binding|INFO|Claiming lport 05ab28a1-e08f-4aa8-83d6-671fd5720283 for this chassis.
Nov 24 22:30:06 compute-0 ovn_controller[97889]: 2025-11-24T22:30:06Z|00087|binding|INFO|05ab28a1-e08f-4aa8-83d6-671fd5720283: Claiming fa:16:3e:ec:6d:17 10.100.0.3
Nov 24 22:30:06 compute-0 nova_compute[189608]: 2025-11-24 22:30:06.477 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:06.491 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:6d:17 10.100.0.3'], port_security=['fa:16:3e:ec:6d:17 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '57d86171-0790-4408-ae34-dfc07ee52747', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f12ed9ff-32cf-41a2-a508-d96ae5468fa1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '47b8cf2705154817a1a23039debe2ac1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6b95a23a-48ea-4462-90d4-e5d4f2776eec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9f70cc3f-b6be-42ce-a39a-7f66ee0c1b99, chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], logical_port=05ab28a1-e08f-4aa8-83d6-671fd5720283) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:06.493 106776 INFO neutron.agent.ovn.metadata.agent [-] Port 05ab28a1-e08f-4aa8-83d6-671fd5720283 in datapath f12ed9ff-32cf-41a2-a508-d96ae5468fa1 bound to our chassis
Nov 24 22:30:06 compute-0 ovn_controller[97889]: 2025-11-24T22:30:06Z|00088|binding|INFO|Setting lport 05ab28a1-e08f-4aa8-83d6-671fd5720283 ovn-installed in OVS
Nov 24 22:30:06 compute-0 ovn_controller[97889]: 2025-11-24T22:30:06Z|00089|binding|INFO|Setting lport 05ab28a1-e08f-4aa8-83d6-671fd5720283 up in Southbound
Nov 24 22:30:06 compute-0 nova_compute[189608]: 2025-11-24 22:30:06.501 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:06 compute-0 nova_compute[189608]: 2025-11-24 22:30:06.505 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:06.503 106776 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f12ed9ff-32cf-41a2-a508-d96ae5468fa1
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:06.522 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[357d4281-c897-4967-bff4-614bbd8977f5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:06.523 106776 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf12ed9ff-31 in ovnmeta-f12ed9ff-32cf-41a2-a508-d96ae5468fa1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:06.528 240020 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf12ed9ff-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:06.528 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[034e3667-3faf-4ea8-9d6a-00ada87c9527]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:06.529 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[1ae1825b-ca91-44f2-b5d7-b5dc41019f6f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:06 compute-0 systemd-udevd[252119]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:06.547 106891 DEBUG oslo.privsep.daemon [-] privsep: reply[c4fcf154-8ebf-4a36-a0e6-49e190520e8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:06 compute-0 NetworkManager[56413]: <info>  [1764023406.5546] device (tap05ab28a1-e0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 22:30:06 compute-0 NetworkManager[56413]: <info>  [1764023406.5558] device (tap05ab28a1-e0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:06.578 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[cded80df-8ff3-40ef-818f-a2c9ddbffb8c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:06 compute-0 systemd-machined[155884]: New machine qemu-8-instance-00000008.
Nov 24 22:30:06 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Nov 24 22:30:06 compute-0 kernel: tapfdd48bd9-f9: entered promiscuous mode
Nov 24 22:30:06 compute-0 NetworkManager[56413]: <info>  [1764023406.5982] manager: (tapfdd48bd9-f9): new Tun device (/org/freedesktop/NetworkManager/Devices/46)
Nov 24 22:30:06 compute-0 ovn_controller[97889]: 2025-11-24T22:30:06Z|00090|binding|INFO|Claiming lport fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13 for this chassis.
Nov 24 22:30:06 compute-0 nova_compute[189608]: 2025-11-24 22:30:06.599 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:06 compute-0 ovn_controller[97889]: 2025-11-24T22:30:06Z|00091|binding|INFO|fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13: Claiming fa:16:3e:40:76:1e 10.100.0.12
Nov 24 22:30:06 compute-0 podman[252095]: 2025-11-24 22:30:06.606249133 +0000 UTC m=+0.154598495 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 24 22:30:06 compute-0 sshd-session[252077]: Invalid user cirros from 185.217.1.246 port 9063
Nov 24 22:30:06 compute-0 nova_compute[189608]: 2025-11-24 22:30:06.618 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:06.616 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:76:1e 10.100.0.12'], port_security=['fa:16:3e:40:76:1e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f238e71a-660e-497c-8472-193245387bcf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-29585b3c-5eec-4652-ae2f-4aa9ec19d924', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '97e21ffeec1c4428ba3d70499fc3281f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f7d22eb6-0a82-485c-96cc-cd31ea984470', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a063a9f4-1c3d-438a-9e7c-e5a5c01b330e, chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], logical_port=fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:30:06 compute-0 ovn_controller[97889]: 2025-11-24T22:30:06Z|00092|binding|INFO|Setting lport fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13 ovn-installed in OVS
Nov 24 22:30:06 compute-0 ovn_controller[97889]: 2025-11-24T22:30:06Z|00093|binding|INFO|Setting lport fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13 up in Southbound
Nov 24 22:30:06 compute-0 NetworkManager[56413]: <info>  [1764023406.6245] device (tapfdd48bd9-f9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 22:30:06 compute-0 NetworkManager[56413]: <info>  [1764023406.6257] device (tapfdd48bd9-f9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 24 22:30:06 compute-0 nova_compute[189608]: 2025-11-24 22:30:06.628 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:06 compute-0 nova_compute[189608]: 2025-11-24 22:30:06.632 189613 DEBUG nova.policy [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fcf527fb124b42b9ab6a20cc0938b39f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4a6957a775da42c9b535753d6b0279d6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:06.634 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[8f3f8b0c-f906-402d-97c4-6e02ccf9e1d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:06.642 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[21be1514-aa08-4279-aaec-970a2b038ab2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:06 compute-0 NetworkManager[56413]: <info>  [1764023406.6433] manager: (tapf12ed9ff-30): new Veth device (/org/freedesktop/NetworkManager/Devices/47)
Nov 24 22:30:06 compute-0 systemd-machined[155884]: New machine qemu-9-instance-00000009.
Nov 24 22:30:06 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:06.682 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[92808711-391a-4f74-832c-40b2baf8b43f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:06.685 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[c5824e40-a4c5-46c1-abac-dae5df327c84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:06 compute-0 NetworkManager[56413]: <info>  [1764023406.7140] device (tapf12ed9ff-30): carrier: link connected
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:06.721 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[eba5a9ea-32e7-4e9e-849d-97dd3b7c3b4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:06.743 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[1a16a52f-bc57-4a04-a5a6-6076997f9b76]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf12ed9ff-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:bf:03'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 525420, 'reachable_time': 42287, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252181, 'error': None, 'target': 'ovnmeta-f12ed9ff-32cf-41a2-a508-d96ae5468fa1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:06.764 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[b9998e3c-f07e-4ffc-93a0-b6177bf9d2af]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec6:bf03'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525420, 'tstamp': 525420}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252183, 'error': None, 'target': 'ovnmeta-f12ed9ff-32cf-41a2-a508-d96ae5468fa1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:06.791 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[2165903d-c78b-4c47-b812-d3c89bde7bf6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf12ed9ff-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:bf:03'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 525420, 'reachable_time': 42287, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252184, 'error': None, 'target': 'ovnmeta-f12ed9ff-32cf-41a2-a508-d96ae5468fa1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:06.840 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[e2b48837-5a80-4ecd-87a2-f32191341b64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:06.933 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[75c90cb7-09d9-44be-8d87-58863031c9de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:06.935 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf12ed9ff-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:06.935 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:06.936 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf12ed9ff-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:30:06 compute-0 kernel: tapf12ed9ff-30: entered promiscuous mode
Nov 24 22:30:06 compute-0 NetworkManager[56413]: <info>  [1764023406.9394] manager: (tapf12ed9ff-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:06.943 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf12ed9ff-30, col_values=(('external_ids', {'iface-id': '194093c7-0709-4152-be40-3515887108e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:30:06 compute-0 nova_compute[189608]: 2025-11-24 22:30:06.938 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:06 compute-0 ovn_controller[97889]: 2025-11-24T22:30:06Z|00094|binding|INFO|Releasing lport 194093c7-0709-4152-be40-3515887108e2 from this chassis (sb_readonly=0)
Nov 24 22:30:06 compute-0 nova_compute[189608]: 2025-11-24 22:30:06.954 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:06.954 106776 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f12ed9ff-32cf-41a2-a508-d96ae5468fa1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f12ed9ff-32cf-41a2-a508-d96ae5468fa1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:06.956 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[8fa5bc07-05ad-446c-8723-38755d3530f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:06.957 106776 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]: global
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]:     log         /dev/log local0 debug
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]:     log-tag     haproxy-metadata-proxy-f12ed9ff-32cf-41a2-a508-d96ae5468fa1
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]:     user        root
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]:     group       root
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]:     maxconn     1024
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]:     pidfile     /var/lib/neutron/external/pids/f12ed9ff-32cf-41a2-a508-d96ae5468fa1.pid.haproxy
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]:     daemon
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]: 
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]: defaults
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]:     log global
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]:     mode http
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]:     option httplog
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]:     option dontlognull
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]:     option http-server-close
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]:     option forwardfor
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]:     retries                 3
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]:     timeout http-request    30s
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]:     timeout connect         30s
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]:     timeout client          32s
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]:     timeout server          32s
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]:     timeout http-keep-alive 30s
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]: 
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]: 
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]: listen listener
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]:     bind 169.254.169.254:80
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]:     server metadata /var/lib/neutron/metadata_proxy
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]:     http-request add-header X-OVN-Network-ID f12ed9ff-32cf-41a2-a508-d96ae5468fa1
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 24 22:30:06 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:06.958 106776 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f12ed9ff-32cf-41a2-a508-d96ae5468fa1', 'env', 'PROCESS_TAG=haproxy-f12ed9ff-32cf-41a2-a508-d96ae5468fa1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f12ed9ff-32cf-41a2-a508-d96ae5468fa1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 24 22:30:06 compute-0 nova_compute[189608]: 2025-11-24 22:30:06.966 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:07 compute-0 nova_compute[189608]: 2025-11-24 22:30:07.055 189613 DEBUG nova.virt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Emitting event <LifecycleEvent: 1764023407.0550501, 57d86171-0790-4408-ae34-dfc07ee52747 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:30:07 compute-0 nova_compute[189608]: 2025-11-24 22:30:07.056 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] VM Started (Lifecycle Event)
Nov 24 22:30:07 compute-0 nova_compute[189608]: 2025-11-24 22:30:07.078 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:30:07 compute-0 nova_compute[189608]: 2025-11-24 22:30:07.087 189613 DEBUG nova.virt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Emitting event <LifecycleEvent: 1764023407.0552168, 57d86171-0790-4408-ae34-dfc07ee52747 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:30:07 compute-0 nova_compute[189608]: 2025-11-24 22:30:07.089 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] VM Paused (Lifecycle Event)
Nov 24 22:30:07 compute-0 nova_compute[189608]: 2025-11-24 22:30:07.109 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:30:07 compute-0 nova_compute[189608]: 2025-11-24 22:30:07.119 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 22:30:07 compute-0 nova_compute[189608]: 2025-11-24 22:30:07.166 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 22:30:07 compute-0 nova_compute[189608]: 2025-11-24 22:30:07.166 189613 DEBUG nova.virt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Emitting event <LifecycleEvent: 1764023407.1305835, f238e71a-660e-497c-8472-193245387bcf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:30:07 compute-0 nova_compute[189608]: 2025-11-24 22:30:07.167 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: f238e71a-660e-497c-8472-193245387bcf] VM Started (Lifecycle Event)
Nov 24 22:30:07 compute-0 nova_compute[189608]: 2025-11-24 22:30:07.283 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: f238e71a-660e-497c-8472-193245387bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:30:07 compute-0 nova_compute[189608]: 2025-11-24 22:30:07.350 189613 DEBUG nova.virt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Emitting event <LifecycleEvent: 1764023407.1306877, f238e71a-660e-497c-8472-193245387bcf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:30:07 compute-0 nova_compute[189608]: 2025-11-24 22:30:07.350 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: f238e71a-660e-497c-8472-193245387bcf] VM Paused (Lifecycle Event)
Nov 24 22:30:07 compute-0 nova_compute[189608]: 2025-11-24 22:30:07.370 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: f238e71a-660e-497c-8472-193245387bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:30:07 compute-0 nova_compute[189608]: 2025-11-24 22:30:07.375 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: f238e71a-660e-497c-8472-193245387bcf] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 22:30:07 compute-0 nova_compute[189608]: 2025-11-24 22:30:07.400 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: f238e71a-660e-497c-8472-193245387bcf] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 22:30:07 compute-0 podman[252228]: 2025-11-24 22:30:07.497029042 +0000 UTC m=+0.102832726 container create 2ff2082550131d910b235bfee129f5b3efceba0639347ccb4f3d906c58f7d45d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f12ed9ff-32cf-41a2-a508-d96ae5468fa1, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 24 22:30:07 compute-0 podman[252228]: 2025-11-24 22:30:07.433931232 +0000 UTC m=+0.039734886 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 22:30:07 compute-0 systemd[1]: Started libpod-conmon-2ff2082550131d910b235bfee129f5b3efceba0639347ccb4f3d906c58f7d45d.scope.
Nov 24 22:30:07 compute-0 systemd[1]: Started libcrun container.
Nov 24 22:30:07 compute-0 nova_compute[189608]: 2025-11-24 22:30:07.603 189613 DEBUG oslo_concurrency.processutils [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3114b07aff678ef05dd12aafd3a42953942e41b.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:30:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3deecc54c960d81e76e6bd51ce45d85c7e71e60b1e6510423000e18bd54df2cf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 24 22:30:07 compute-0 podman[252228]: 2025-11-24 22:30:07.646936559 +0000 UTC m=+0.252740213 container init 2ff2082550131d910b235bfee129f5b3efceba0639347ccb4f3d906c58f7d45d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f12ed9ff-32cf-41a2-a508-d96ae5468fa1, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 24 22:30:07 compute-0 podman[252228]: 2025-11-24 22:30:07.675034153 +0000 UTC m=+0.280837797 container start 2ff2082550131d910b235bfee129f5b3efceba0639347ccb4f3d906c58f7d45d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f12ed9ff-32cf-41a2-a508-d96ae5468fa1, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 24 22:30:07 compute-0 nova_compute[189608]: 2025-11-24 22:30:07.708 189613 DEBUG oslo_concurrency.processutils [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3114b07aff678ef05dd12aafd3a42953942e41b.part --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:30:07 compute-0 nova_compute[189608]: 2025-11-24 22:30:07.710 189613 DEBUG nova.virt.images [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] ea88776c-3c0b-4e74-99b4-08aadc81390f was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Nov 24 22:30:07 compute-0 nova_compute[189608]: 2025-11-24 22:30:07.712 189613 DEBUG nova.privsep.utils [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 24 22:30:07 compute-0 nova_compute[189608]: 2025-11-24 22:30:07.713 189613 DEBUG oslo_concurrency.processutils [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/e3114b07aff678ef05dd12aafd3a42953942e41b.part /var/lib/nova/instances/_base/e3114b07aff678ef05dd12aafd3a42953942e41b.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:30:07 compute-0 neutron-haproxy-ovnmeta-f12ed9ff-32cf-41a2-a508-d96ae5468fa1[252244]: [NOTICE]   (252249) : New worker (252254) forked
Nov 24 22:30:07 compute-0 neutron-haproxy-ovnmeta-f12ed9ff-32cf-41a2-a508-d96ae5468fa1[252244]: [NOTICE]   (252249) : Loading success.
Nov 24 22:30:07 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:07.768 106776 INFO neutron.agent.ovn.metadata.agent [-] Port fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13 in datapath 29585b3c-5eec-4652-ae2f-4aa9ec19d924 unbound from our chassis
Nov 24 22:30:07 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:07.775 106776 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 29585b3c-5eec-4652-ae2f-4aa9ec19d924
Nov 24 22:30:07 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:07.790 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[51d07aa4-d60b-4e6f-826e-e3b8eef1d2e8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:07 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:07.792 106776 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap29585b3c-51 in ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 24 22:30:07 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:07.795 240020 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap29585b3c-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 24 22:30:07 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:07.796 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[7cec70cd-0893-4c6f-b323-6167e6427f65]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:07 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:07.800 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[0e9ce0ef-7675-4ca3-a8e3-dddf8068747e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:07 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:07.815 106891 DEBUG oslo.privsep.daemon [-] privsep: reply[ecfb9a1b-0dc6-4e14-ba70-9e5b55f84498]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:07 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:07.847 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[1bccbb9d-2830-4e83-be2c-854d693bcd44]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:07 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:07.899 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[508e4d43-1450-450c-b90e-dfda1515a40a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:07 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:07.910 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[f8f24d66-fb98-4909-90d5-01995834731b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:07 compute-0 NetworkManager[56413]: <info>  [1764023407.9125] manager: (tap29585b3c-50): new Veth device (/org/freedesktop/NetworkManager/Devices/49)
Nov 24 22:30:07 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:07.967 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[d998f47b-a7d1-4240-8c16-b1cf66a4ba91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:07 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:07.974 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[68b250ba-7a09-49b3-9c9f-cc07867c1f1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:08 compute-0 NetworkManager[56413]: <info>  [1764023408.0160] device (tap29585b3c-50): carrier: link connected
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.018 189613 DEBUG oslo_concurrency.processutils [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/e3114b07aff678ef05dd12aafd3a42953942e41b.part /var/lib/nova/instances/_base/e3114b07aff678ef05dd12aafd3a42953942e41b.converted" returned: 0 in 0.305s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.028 189613 DEBUG oslo_concurrency.processutils [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3114b07aff678ef05dd12aafd3a42953942e41b.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:30:08 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:08.027 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[f73ef3dd-03b9-4401-81d4-b778c2cf5c80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:08 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:08.065 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[da40e5d5-fb66-4ab3-9676-6f7e876cdca5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap29585b3c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:75:b9:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 525550, 'reachable_time': 26251, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252279, 'error': None, 'target': 'ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:08 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:08.084 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[9809da2e-2ffe-4f08-b0ef-2fdde17f05c7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe75:b92f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525550, 'tstamp': 525550}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252281, 'error': None, 'target': 'ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:08 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:08.117 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[5d27b0a7-e05f-4740-9e2a-5dc8e9385590]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap29585b3c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:75:b9:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 525550, 'reachable_time': 26251, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252282, 'error': None, 'target': 'ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.132 189613 DEBUG oslo_concurrency.processutils [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3114b07aff678ef05dd12aafd3a42953942e41b.converted --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.136 189613 DEBUG oslo_concurrency.lockutils [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Lock "e3114b07aff678ef05dd12aafd3a42953942e41b" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.166 189613 DEBUG oslo_concurrency.processutils [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3114b07aff678ef05dd12aafd3a42953942e41b --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:30:08 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:08.176 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[9da3304c-56d3-4bac-bf2a-403ca3e505ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.242 189613 DEBUG oslo_concurrency.processutils [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3114b07aff678ef05dd12aafd3a42953942e41b --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.244 189613 DEBUG oslo_concurrency.lockutils [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Acquiring lock "e3114b07aff678ef05dd12aafd3a42953942e41b" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.246 189613 DEBUG oslo_concurrency.lockutils [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Lock "e3114b07aff678ef05dd12aafd3a42953942e41b" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:30:08 compute-0 sshd-session[252077]: Disconnecting invalid user cirros 185.217.1.246 port 9063: Change of username or service not allowed: (cirros,ssh-connection) -> (,ssh-connection) [preauth]
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.272 189613 DEBUG oslo_concurrency.processutils [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3114b07aff678ef05dd12aafd3a42953942e41b --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:30:08 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:08.279 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[50b6ca12-ab04-4a97-8bad-83a689fa15e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:08 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:08.281 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap29585b3c-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:30:08 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:08.282 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 22:30:08 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:08.283 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap29585b3c-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:30:08 compute-0 kernel: tap29585b3c-50: entered promiscuous mode
Nov 24 22:30:08 compute-0 NetworkManager[56413]: <info>  [1764023408.2870] manager: (tap29585b3c-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Nov 24 22:30:08 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:08.294 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap29585b3c-50, col_values=(('external_ids', {'iface-id': '7dcd4ddb-3860-49b9-87ed-1daf692defef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:30:08 compute-0 ovn_controller[97889]: 2025-11-24T22:30:08Z|00095|binding|INFO|Releasing lport 7dcd4ddb-3860-49b9-87ed-1daf692defef from this chassis (sb_readonly=0)
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.300 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.330 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:08 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:08.330 106776 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/29585b3c-5eec-4652-ae2f-4aa9ec19d924.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/29585b3c-5eec-4652-ae2f-4aa9ec19d924.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 24 22:30:08 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:08.332 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[6aba2e52-c56b-4814-bc34-a5aaa6bb1a74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:08 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:08.334 106776 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 24 22:30:08 compute-0 ovn_metadata_agent[106771]: global
Nov 24 22:30:08 compute-0 ovn_metadata_agent[106771]:     log         /dev/log local0 debug
Nov 24 22:30:08 compute-0 ovn_metadata_agent[106771]:     log-tag     haproxy-metadata-proxy-29585b3c-5eec-4652-ae2f-4aa9ec19d924
Nov 24 22:30:08 compute-0 ovn_metadata_agent[106771]:     user        root
Nov 24 22:30:08 compute-0 ovn_metadata_agent[106771]:     group       root
Nov 24 22:30:08 compute-0 ovn_metadata_agent[106771]:     maxconn     1024
Nov 24 22:30:08 compute-0 ovn_metadata_agent[106771]:     pidfile     /var/lib/neutron/external/pids/29585b3c-5eec-4652-ae2f-4aa9ec19d924.pid.haproxy
Nov 24 22:30:08 compute-0 ovn_metadata_agent[106771]:     daemon
Nov 24 22:30:08 compute-0 ovn_metadata_agent[106771]: 
Nov 24 22:30:08 compute-0 ovn_metadata_agent[106771]: defaults
Nov 24 22:30:08 compute-0 ovn_metadata_agent[106771]:     log global
Nov 24 22:30:08 compute-0 ovn_metadata_agent[106771]:     mode http
Nov 24 22:30:08 compute-0 ovn_metadata_agent[106771]:     option httplog
Nov 24 22:30:08 compute-0 ovn_metadata_agent[106771]:     option dontlognull
Nov 24 22:30:08 compute-0 ovn_metadata_agent[106771]:     option http-server-close
Nov 24 22:30:08 compute-0 ovn_metadata_agent[106771]:     option forwardfor
Nov 24 22:30:08 compute-0 ovn_metadata_agent[106771]:     retries                 3
Nov 24 22:30:08 compute-0 ovn_metadata_agent[106771]:     timeout http-request    30s
Nov 24 22:30:08 compute-0 ovn_metadata_agent[106771]:     timeout connect         30s
Nov 24 22:30:08 compute-0 ovn_metadata_agent[106771]:     timeout client          32s
Nov 24 22:30:08 compute-0 ovn_metadata_agent[106771]:     timeout server          32s
Nov 24 22:30:08 compute-0 ovn_metadata_agent[106771]:     timeout http-keep-alive 30s
Nov 24 22:30:08 compute-0 ovn_metadata_agent[106771]: 
Nov 24 22:30:08 compute-0 ovn_metadata_agent[106771]: 
Nov 24 22:30:08 compute-0 ovn_metadata_agent[106771]: listen listener
Nov 24 22:30:08 compute-0 ovn_metadata_agent[106771]:     bind 169.254.169.254:80
Nov 24 22:30:08 compute-0 ovn_metadata_agent[106771]:     server metadata /var/lib/neutron/metadata_proxy
Nov 24 22:30:08 compute-0 ovn_metadata_agent[106771]:     http-request add-header X-OVN-Network-ID 29585b3c-5eec-4652-ae2f-4aa9ec19d924
Nov 24 22:30:08 compute-0 ovn_metadata_agent[106771]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 24 22:30:08 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:08.335 106776 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924', 'env', 'PROCESS_TAG=haproxy-29585b3c-5eec-4652-ae2f-4aa9ec19d924', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/29585b3c-5eec-4652-ae2f-4aa9ec19d924.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.360 189613 DEBUG oslo_concurrency.processutils [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3114b07aff678ef05dd12aafd3a42953942e41b --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.361 189613 DEBUG oslo_concurrency.processutils [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e3114b07aff678ef05dd12aafd3a42953942e41b,backing_fmt=raw /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.437 189613 DEBUG oslo_concurrency.processutils [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e3114b07aff678ef05dd12aafd3a42953942e41b,backing_fmt=raw /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk 1073741824" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.442 189613 DEBUG oslo_concurrency.lockutils [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Lock "e3114b07aff678ef05dd12aafd3a42953942e41b" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.196s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.443 189613 DEBUG oslo_concurrency.processutils [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3114b07aff678ef05dd12aafd3a42953942e41b --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.477 189613 DEBUG nova.network.neutron [req-7e52c72f-acc5-4cb9-9c55-6aa986cea52e req-3b42ef8b-bc88-4a52-8da7-9332fe65589d c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Updated VIF entry in instance network info cache for port 05ab28a1-e08f-4aa8-83d6-671fd5720283. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.479 189613 DEBUG nova.network.neutron [req-7e52c72f-acc5-4cb9-9c55-6aa986cea52e req-3b42ef8b-bc88-4a52-8da7-9332fe65589d c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Updating instance_info_cache with network_info: [{"id": "05ab28a1-e08f-4aa8-83d6-671fd5720283", "address": "fa:16:3e:ec:6d:17", "network": {"id": "f12ed9ff-32cf-41a2-a508-d96ae5468fa1", "bridge": "br-int", "label": "tempest-ServersTestJSON-2010443651-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "47b8cf2705154817a1a23039debe2ac1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05ab28a1-e0", "ovs_interfaceid": "05ab28a1-e08f-4aa8-83d6-671fd5720283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.516 189613 DEBUG nova.compute.manager [req-d18a3da2-9842-4651-8c80-edd8b0945a9d req-2f3d394c-6f4a-4793-808f-4f2debb6f4b8 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Received event network-vif-plugged-05ab28a1-e08f-4aa8-83d6-671fd5720283 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.517 189613 DEBUG oslo_concurrency.lockutils [req-d18a3da2-9842-4651-8c80-edd8b0945a9d req-2f3d394c-6f4a-4793-808f-4f2debb6f4b8 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "57d86171-0790-4408-ae34-dfc07ee52747-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.517 189613 DEBUG oslo_concurrency.lockutils [req-d18a3da2-9842-4651-8c80-edd8b0945a9d req-2f3d394c-6f4a-4793-808f-4f2debb6f4b8 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "57d86171-0790-4408-ae34-dfc07ee52747-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.517 189613 DEBUG oslo_concurrency.lockutils [req-d18a3da2-9842-4651-8c80-edd8b0945a9d req-2f3d394c-6f4a-4793-808f-4f2debb6f4b8 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "57d86171-0790-4408-ae34-dfc07ee52747-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.518 189613 DEBUG nova.compute.manager [req-d18a3da2-9842-4651-8c80-edd8b0945a9d req-2f3d394c-6f4a-4793-808f-4f2debb6f4b8 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Processing event network-vif-plugged-05ab28a1-e08f-4aa8-83d6-671fd5720283 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.520 189613 DEBUG nova.compute.manager [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.522 189613 DEBUG oslo_concurrency.lockutils [req-7e52c72f-acc5-4cb9-9c55-6aa986cea52e req-3b42ef8b-bc88-4a52-8da7-9332fe65589d c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Releasing lock "refresh_cache-57d86171-0790-4408-ae34-dfc07ee52747" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.530 189613 DEBUG oslo_concurrency.processutils [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3114b07aff678ef05dd12aafd3a42953942e41b --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.531 189613 DEBUG nova.virt.disk.api [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Checking if we can resize image /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.532 189613 DEBUG oslo_concurrency.processutils [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.562 189613 DEBUG nova.virt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Emitting event <LifecycleEvent: 1764023408.547482, 57d86171-0790-4408-ae34-dfc07ee52747 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.563 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] VM Resumed (Lifecycle Event)
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.572 189613 DEBUG nova.virt.libvirt.driver [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.583 189613 INFO nova.virt.libvirt.driver [-] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Instance spawned successfully.
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.583 189613 DEBUG nova.virt.libvirt.driver [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.607 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.625 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.632 189613 DEBUG nova.virt.libvirt.driver [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.633 189613 DEBUG nova.virt.libvirt.driver [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.634 189613 DEBUG nova.virt.libvirt.driver [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.634 189613 DEBUG nova.virt.libvirt.driver [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.635 189613 DEBUG nova.virt.libvirt.driver [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.636 189613 DEBUG nova.virt.libvirt.driver [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.644 189613 DEBUG oslo_concurrency.processutils [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json" returned: 0 in 0.113s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.645 189613 DEBUG nova.virt.disk.api [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Cannot resize image /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.645 189613 DEBUG nova.objects.instance [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Lazy-loading 'migration_context' on Instance uuid a3bee9ba-6618-44bd-a443-da9fff6862a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.650 189613 DEBUG nova.network.neutron [req-d6cf747e-5b34-4c8f-9483-79a5debe84da req-9868d36d-68cf-44a8-b8b1-0ba331da8288 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Updated VIF entry in instance network info cache for port fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.651 189613 DEBUG nova.network.neutron [req-d6cf747e-5b34-4c8f-9483-79a5debe84da req-9868d36d-68cf-44a8-b8b1-0ba331da8288 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Updating instance_info_cache with network_info: [{"id": "fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13", "address": "fa:16:3e:40:76:1e", "network": {"id": "29585b3c-5eec-4652-ae2f-4aa9ec19d924", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-98517945-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "97e21ffeec1c4428ba3d70499fc3281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd48bd9-f9", "ovs_interfaceid": "fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.688 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.697 189613 DEBUG oslo_concurrency.lockutils [req-d6cf747e-5b34-4c8f-9483-79a5debe84da req-9868d36d-68cf-44a8-b8b1-0ba331da8288 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Releasing lock "refresh_cache-f238e71a-660e-497c-8472-193245387bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.700 189613 DEBUG nova.virt.libvirt.driver [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.700 189613 DEBUG nova.virt.libvirt.driver [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Ensure instance console log exists: /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.701 189613 DEBUG oslo_concurrency.lockutils [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.701 189613 DEBUG oslo_concurrency.lockutils [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.701 189613 DEBUG oslo_concurrency.lockutils [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.736 189613 DEBUG nova.network.neutron [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Successfully updated port: 8f00051e-bd87-48eb-aba6-5dbf3d527aef _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.757 189613 DEBUG oslo_concurrency.lockutils [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Acquiring lock "refresh_cache-cf45f1e3-b80d-4213-80aa-995f57a9a476" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.758 189613 DEBUG oslo_concurrency.lockutils [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Acquired lock "refresh_cache-cf45f1e3-b80d-4213-80aa-995f57a9a476" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.758 189613 DEBUG nova.network.neutron [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.767 189613 INFO nova.compute.manager [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Took 9.93 seconds to spawn the instance on the hypervisor.
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.767 189613 DEBUG nova.compute.manager [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.849 189613 INFO nova.compute.manager [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Took 10.65 seconds to build instance.
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.866 189613 DEBUG oslo_concurrency.lockutils [None req-ba359e8b-75de-45e1-b933-fb8338df6f2a 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Lock "57d86171-0790-4408-ae34-dfc07ee52747" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:30:08 compute-0 podman[252331]: 2025-11-24 22:30:08.896902059 +0000 UTC m=+0.094623281 container create 648d59efe6ca350a93bc942afa146decb101d025ab75b344fbc656f64aa64a4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 24 22:30:08 compute-0 podman[252331]: 2025-11-24 22:30:08.84897686 +0000 UTC m=+0.046698102 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 22:30:08 compute-0 systemd[1]: Started libpod-conmon-648d59efe6ca350a93bc942afa146decb101d025ab75b344fbc656f64aa64a4b.scope.
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.966 189613 DEBUG nova.compute.manager [req-bea5cdc6-6beb-480a-9367-f182f352eede req-24c9bd82-f52a-4f26-b35c-ad7a3391ff38 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Received event network-changed-8f00051e-bd87-48eb-aba6-5dbf3d527aef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.967 189613 DEBUG nova.compute.manager [req-bea5cdc6-6beb-480a-9367-f182f352eede req-24c9bd82-f52a-4f26-b35c-ad7a3391ff38 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Refreshing instance network info cache due to event network-changed-8f00051e-bd87-48eb-aba6-5dbf3d527aef. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 22:30:08 compute-0 nova_compute[189608]: 2025-11-24 22:30:08.967 189613 DEBUG oslo_concurrency.lockutils [req-bea5cdc6-6beb-480a-9367-f182f352eede req-24c9bd82-f52a-4f26-b35c-ad7a3391ff38 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "refresh_cache-cf45f1e3-b80d-4213-80aa-995f57a9a476" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:30:08 compute-0 systemd[1]: Started libcrun container.
Nov 24 22:30:09 compute-0 nova_compute[189608]: 2025-11-24 22:30:09.015 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12ed94dc02eadc742cf871717cf4f4e6bb56b5b461328eb566351902d7da9122/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 24 22:30:09 compute-0 nova_compute[189608]: 2025-11-24 22:30:09.055 189613 DEBUG nova.network.neutron [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 22:30:09 compute-0 podman[252331]: 2025-11-24 22:30:09.060635906 +0000 UTC m=+0.258357148 container init 648d59efe6ca350a93bc942afa146decb101d025ab75b344fbc656f64aa64a4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 22:30:09 compute-0 nova_compute[189608]: 2025-11-24 22:30:09.072 189613 DEBUG nova.network.neutron [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Successfully created port: 5efccbc3-b2bb-4d9d-ba64-9382a4b2487b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 24 22:30:09 compute-0 podman[252331]: 2025-11-24 22:30:09.076692176 +0000 UTC m=+0.274413388 container start 648d59efe6ca350a93bc942afa146decb101d025ab75b344fbc656f64aa64a4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 22:30:09 compute-0 neutron-haproxy-ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924[252346]: [NOTICE]   (252350) : New worker (252352) forked
Nov 24 22:30:09 compute-0 neutron-haproxy-ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924[252346]: [NOTICE]   (252350) : Loading success.
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.661 189613 DEBUG nova.network.neutron [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Updating instance_info_cache with network_info: [{"id": "8f00051e-bd87-48eb-aba6-5dbf3d527aef", "address": "fa:16:3e:d2:89:1f", "network": {"id": "6c160915-cb1d-4981-a2c7-30899c389f1d", "bridge": "br-int", "label": "tempest-network-smoke--162777084", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ac27d3d1c734f4bab455262f79d3106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f00051e-bd", "ovs_interfaceid": "8f00051e-bd87-48eb-aba6-5dbf3d527aef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.685 189613 DEBUG oslo_concurrency.lockutils [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Releasing lock "refresh_cache-cf45f1e3-b80d-4213-80aa-995f57a9a476" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.686 189613 DEBUG nova.compute.manager [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Instance network_info: |[{"id": "8f00051e-bd87-48eb-aba6-5dbf3d527aef", "address": "fa:16:3e:d2:89:1f", "network": {"id": "6c160915-cb1d-4981-a2c7-30899c389f1d", "bridge": "br-int", "label": "tempest-network-smoke--162777084", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ac27d3d1c734f4bab455262f79d3106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f00051e-bd", "ovs_interfaceid": "8f00051e-bd87-48eb-aba6-5dbf3d527aef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.687 189613 DEBUG oslo_concurrency.lockutils [req-bea5cdc6-6beb-480a-9367-f182f352eede req-24c9bd82-f52a-4f26-b35c-ad7a3391ff38 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquired lock "refresh_cache-cf45f1e3-b80d-4213-80aa-995f57a9a476" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.687 189613 DEBUG nova.network.neutron [req-bea5cdc6-6beb-480a-9367-f182f352eede req-24c9bd82-f52a-4f26-b35c-ad7a3391ff38 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Refreshing network info cache for port 8f00051e-bd87-48eb-aba6-5dbf3d527aef _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.692 189613 DEBUG nova.virt.libvirt.driver [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Start _get_guest_xml network_info=[{"id": "8f00051e-bd87-48eb-aba6-5dbf3d527aef", "address": "fa:16:3e:d2:89:1f", "network": {"id": "6c160915-cb1d-4981-a2c7-30899c389f1d", "bridge": "br-int", "label": "tempest-network-smoke--162777084", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ac27d3d1c734f4bab455262f79d3106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f00051e-bd", "ovs_interfaceid": "8f00051e-bd87-48eb-aba6-5dbf3d527aef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T22:28:16Z,direct_url=<?>,disk_format='qcow2',id=ec71d7d5-c197-4331-bf8d-e2de71a8419f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='309342b7e3e849b2a5dd56651d8fa068',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T22:28:18Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'size': 0, 'encryption_format': None, 'guest_format': None, 'image_id': 'ec71d7d5-c197-4331-bf8d-e2de71a8419f'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.703 189613 WARNING nova.virt.libvirt.driver [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.719 189613 DEBUG nova.virt.libvirt.host [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.720 189613 DEBUG nova.virt.libvirt.host [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.723 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.728 189613 DEBUG nova.virt.libvirt.host [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.728 189613 DEBUG nova.virt.libvirt.host [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.729 189613 DEBUG nova.virt.libvirt.driver [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.730 189613 DEBUG nova.virt.hardware [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-24T22:28:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a49f1e6c-1051-4dea-812e-0063121444a0',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T22:28:16Z,direct_url=<?>,disk_format='qcow2',id=ec71d7d5-c197-4331-bf8d-e2de71a8419f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='309342b7e3e849b2a5dd56651d8fa068',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T22:28:18Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.731 189613 DEBUG nova.virt.hardware [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.732 189613 DEBUG nova.virt.hardware [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.732 189613 DEBUG nova.virt.hardware [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.732 189613 DEBUG nova.virt.hardware [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.733 189613 DEBUG nova.virt.hardware [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.733 189613 DEBUG nova.virt.hardware [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.733 189613 DEBUG nova.virt.hardware [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.734 189613 DEBUG nova.virt.hardware [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.734 189613 DEBUG nova.virt.hardware [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.734 189613 DEBUG nova.virt.hardware [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.740 189613 DEBUG nova.virt.libvirt.vif [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T22:30:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-501882066',display_name='tempest-TestNetworkBasicOps-server-501882066',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-501882066',id=10,image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPlZjxuKSGTmxgVDchYk+GJcLGRXvs9CnKpiEnZ/PwqWKNeGx51EqI/uX1m3Drik1zAThCC+0gOJLoaHRaz7LgOa+K81EwBXRWqudbIpt61K0/Cg/CZImZCe2iCDs0sZJg==',key_name='tempest-TestNetworkBasicOps-1667007681',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4ac27d3d1c734f4bab455262f79d3106',ramdisk_id='',reservation_id='r-hxvr2jjm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-488656933',owner_user_name='tempest-TestNetworkBasicOps-488656933-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T22:30:01Z,user_data=None,user_id='1599850e48894151b7909b89547cd9e2',uuid=cf45f1e3-b80d-4213-80aa-995f57a9a476,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8f00051e-bd87-48eb-aba6-5dbf3d527aef", "address": "fa:16:3e:d2:89:1f", "network": {"id": "6c160915-cb1d-4981-a2c7-30899c389f1d", "bridge": "br-int", "label": "tempest-network-smoke--162777084", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ac27d3d1c734f4bab455262f79d3106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f00051e-bd", "ovs_interfaceid": "8f00051e-bd87-48eb-aba6-5dbf3d527aef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.741 189613 DEBUG nova.network.os_vif_util [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Converting VIF {"id": "8f00051e-bd87-48eb-aba6-5dbf3d527aef", "address": "fa:16:3e:d2:89:1f", "network": {"id": "6c160915-cb1d-4981-a2c7-30899c389f1d", "bridge": "br-int", "label": "tempest-network-smoke--162777084", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ac27d3d1c734f4bab455262f79d3106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f00051e-bd", "ovs_interfaceid": "8f00051e-bd87-48eb-aba6-5dbf3d527aef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.742 189613 DEBUG nova.network.os_vif_util [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:89:1f,bridge_name='br-int',has_traffic_filtering=True,id=8f00051e-bd87-48eb-aba6-5dbf3d527aef,network=Network(6c160915-cb1d-4981-a2c7-30899c389f1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f00051e-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.744 189613 DEBUG nova.objects.instance [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Lazy-loading 'pci_devices' on Instance uuid cf45f1e3-b80d-4213-80aa-995f57a9a476 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.767 189613 DEBUG nova.virt.libvirt.driver [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] End _get_guest_xml xml=<domain type="kvm">
Nov 24 22:30:10 compute-0 nova_compute[189608]:   <uuid>cf45f1e3-b80d-4213-80aa-995f57a9a476</uuid>
Nov 24 22:30:10 compute-0 nova_compute[189608]:   <name>instance-0000000a</name>
Nov 24 22:30:10 compute-0 nova_compute[189608]:   <memory>131072</memory>
Nov 24 22:30:10 compute-0 nova_compute[189608]:   <vcpu>1</vcpu>
Nov 24 22:30:10 compute-0 nova_compute[189608]:   <metadata>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 22:30:10 compute-0 nova_compute[189608]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:       <nova:name>tempest-TestNetworkBasicOps-server-501882066</nova:name>
Nov 24 22:30:10 compute-0 nova_compute[189608]:       <nova:creationTime>2025-11-24 22:30:10</nova:creationTime>
Nov 24 22:30:10 compute-0 nova_compute[189608]:       <nova:flavor name="m1.nano">
Nov 24 22:30:10 compute-0 nova_compute[189608]:         <nova:memory>128</nova:memory>
Nov 24 22:30:10 compute-0 nova_compute[189608]:         <nova:disk>1</nova:disk>
Nov 24 22:30:10 compute-0 nova_compute[189608]:         <nova:swap>0</nova:swap>
Nov 24 22:30:10 compute-0 nova_compute[189608]:         <nova:ephemeral>0</nova:ephemeral>
Nov 24 22:30:10 compute-0 nova_compute[189608]:         <nova:vcpus>1</nova:vcpus>
Nov 24 22:30:10 compute-0 nova_compute[189608]:       </nova:flavor>
Nov 24 22:30:10 compute-0 nova_compute[189608]:       <nova:owner>
Nov 24 22:30:10 compute-0 nova_compute[189608]:         <nova:user uuid="1599850e48894151b7909b89547cd9e2">tempest-TestNetworkBasicOps-488656933-project-member</nova:user>
Nov 24 22:30:10 compute-0 nova_compute[189608]:         <nova:project uuid="4ac27d3d1c734f4bab455262f79d3106">tempest-TestNetworkBasicOps-488656933</nova:project>
Nov 24 22:30:10 compute-0 nova_compute[189608]:       </nova:owner>
Nov 24 22:30:10 compute-0 nova_compute[189608]:       <nova:root type="image" uuid="ec71d7d5-c197-4331-bf8d-e2de71a8419f"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:       <nova:ports>
Nov 24 22:30:10 compute-0 nova_compute[189608]:         <nova:port uuid="8f00051e-bd87-48eb-aba6-5dbf3d527aef">
Nov 24 22:30:10 compute-0 nova_compute[189608]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:         </nova:port>
Nov 24 22:30:10 compute-0 nova_compute[189608]:       </nova:ports>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     </nova:instance>
Nov 24 22:30:10 compute-0 nova_compute[189608]:   </metadata>
Nov 24 22:30:10 compute-0 nova_compute[189608]:   <sysinfo type="smbios">
Nov 24 22:30:10 compute-0 nova_compute[189608]:     <system>
Nov 24 22:30:10 compute-0 nova_compute[189608]:       <entry name="manufacturer">RDO</entry>
Nov 24 22:30:10 compute-0 nova_compute[189608]:       <entry name="product">OpenStack Compute</entry>
Nov 24 22:30:10 compute-0 nova_compute[189608]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 22:30:10 compute-0 nova_compute[189608]:       <entry name="serial">cf45f1e3-b80d-4213-80aa-995f57a9a476</entry>
Nov 24 22:30:10 compute-0 nova_compute[189608]:       <entry name="uuid">cf45f1e3-b80d-4213-80aa-995f57a9a476</entry>
Nov 24 22:30:10 compute-0 nova_compute[189608]:       <entry name="family">Virtual Machine</entry>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     </system>
Nov 24 22:30:10 compute-0 nova_compute[189608]:   </sysinfo>
Nov 24 22:30:10 compute-0 nova_compute[189608]:   <os>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     <boot dev="hd"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     <smbios mode="sysinfo"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:   </os>
Nov 24 22:30:10 compute-0 nova_compute[189608]:   <features>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     <acpi/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     <apic/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     <vmcoreinfo/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:   </features>
Nov 24 22:30:10 compute-0 nova_compute[189608]:   <clock offset="utc">
Nov 24 22:30:10 compute-0 nova_compute[189608]:     <timer name="pit" tickpolicy="delay"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     <timer name="hpet" present="no"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:   </clock>
Nov 24 22:30:10 compute-0 nova_compute[189608]:   <cpu mode="host-model" match="exact">
Nov 24 22:30:10 compute-0 nova_compute[189608]:     <topology sockets="1" cores="1" threads="1"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:   </cpu>
Nov 24 22:30:10 compute-0 nova_compute[189608]:   <devices>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     <disk type="file" device="disk">
Nov 24 22:30:10 compute-0 nova_compute[189608]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:       <source file="/var/lib/nova/instances/cf45f1e3-b80d-4213-80aa-995f57a9a476/disk"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:       <target dev="vda" bus="virtio"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     </disk>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     <disk type="file" device="cdrom">
Nov 24 22:30:10 compute-0 nova_compute[189608]:       <driver name="qemu" type="raw" cache="none"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:       <source file="/var/lib/nova/instances/cf45f1e3-b80d-4213-80aa-995f57a9a476/disk.config"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:       <target dev="sda" bus="sata"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     </disk>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     <interface type="ethernet">
Nov 24 22:30:10 compute-0 nova_compute[189608]:       <mac address="fa:16:3e:d2:89:1f"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:       <model type="virtio"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:       <driver name="vhost" rx_queue_size="512"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:       <mtu size="1442"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:       <target dev="tap8f00051e-bd"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     </interface>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     <serial type="pty">
Nov 24 22:30:10 compute-0 nova_compute[189608]:       <log file="/var/lib/nova/instances/cf45f1e3-b80d-4213-80aa-995f57a9a476/console.log" append="off"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     </serial>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     <video>
Nov 24 22:30:10 compute-0 nova_compute[189608]:       <model type="virtio"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     </video>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     <input type="tablet" bus="usb"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     <rng model="virtio">
Nov 24 22:30:10 compute-0 nova_compute[189608]:       <backend model="random">/dev/urandom</backend>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     </rng>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     <controller type="usb" index="0"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     <memballoon model="virtio">
Nov 24 22:30:10 compute-0 nova_compute[189608]:       <stats period="10"/>
Nov 24 22:30:10 compute-0 nova_compute[189608]:     </memballoon>
Nov 24 22:30:10 compute-0 nova_compute[189608]:   </devices>
Nov 24 22:30:10 compute-0 nova_compute[189608]: </domain>
Nov 24 22:30:10 compute-0 nova_compute[189608]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.768 189613 DEBUG nova.compute.manager [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Preparing to wait for external event network-vif-plugged-8f00051e-bd87-48eb-aba6-5dbf3d527aef prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.768 189613 DEBUG oslo_concurrency.lockutils [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Acquiring lock "cf45f1e3-b80d-4213-80aa-995f57a9a476-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.768 189613 DEBUG oslo_concurrency.lockutils [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Lock "cf45f1e3-b80d-4213-80aa-995f57a9a476-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.769 189613 DEBUG oslo_concurrency.lockutils [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Lock "cf45f1e3-b80d-4213-80aa-995f57a9a476-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.769 189613 DEBUG nova.virt.libvirt.vif [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T22:30:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-501882066',display_name='tempest-TestNetworkBasicOps-server-501882066',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-501882066',id=10,image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPlZjxuKSGTmxgVDchYk+GJcLGRXvs9CnKpiEnZ/PwqWKNeGx51EqI/uX1m3Drik1zAThCC+0gOJLoaHRaz7LgOa+K81EwBXRWqudbIpt61K0/Cg/CZImZCe2iCDs0sZJg==',key_name='tempest-TestNetworkBasicOps-1667007681',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4ac27d3d1c734f4bab455262f79d3106',ramdisk_id='',reservation_id='r-hxvr2jjm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-488656933',owner_user_name='tempest-TestNetworkBasicOps-488656933-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T22:30:01Z,user_data=None,user_id='1599850e48894151b7909b89547cd9e2',uuid=cf45f1e3-b80d-4213-80aa-995f57a9a476,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8f00051e-bd87-48eb-aba6-5dbf3d527aef", "address": "fa:16:3e:d2:89:1f", "network": {"id": "6c160915-cb1d-4981-a2c7-30899c389f1d", "bridge": "br-int", "label": "tempest-network-smoke--162777084", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ac27d3d1c734f4bab455262f79d3106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f00051e-bd", "ovs_interfaceid": "8f00051e-bd87-48eb-aba6-5dbf3d527aef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.770 189613 DEBUG nova.network.os_vif_util [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Converting VIF {"id": "8f00051e-bd87-48eb-aba6-5dbf3d527aef", "address": "fa:16:3e:d2:89:1f", "network": {"id": "6c160915-cb1d-4981-a2c7-30899c389f1d", "bridge": "br-int", "label": "tempest-network-smoke--162777084", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ac27d3d1c734f4bab455262f79d3106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f00051e-bd", "ovs_interfaceid": "8f00051e-bd87-48eb-aba6-5dbf3d527aef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.771 189613 DEBUG nova.network.os_vif_util [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:89:1f,bridge_name='br-int',has_traffic_filtering=True,id=8f00051e-bd87-48eb-aba6-5dbf3d527aef,network=Network(6c160915-cb1d-4981-a2c7-30899c389f1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f00051e-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.771 189613 DEBUG os_vif [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:89:1f,bridge_name='br-int',has_traffic_filtering=True,id=8f00051e-bd87-48eb-aba6-5dbf3d527aef,network=Network(6c160915-cb1d-4981-a2c7-30899c389f1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f00051e-bd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.772 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.772 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.773 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.778 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.778 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f00051e-bd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.779 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8f00051e-bd, col_values=(('external_ids', {'iface-id': '8f00051e-bd87-48eb-aba6-5dbf3d527aef', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d2:89:1f', 'vm-uuid': 'cf45f1e3-b80d-4213-80aa-995f57a9a476'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:30:10 compute-0 NetworkManager[56413]: <info>  [1764023410.7836] manager: (tap8f00051e-bd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.781 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.786 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.794 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.795 189613 INFO os_vif [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:89:1f,bridge_name='br-int',has_traffic_filtering=True,id=8f00051e-bd87-48eb-aba6-5dbf3d527aef,network=Network(6c160915-cb1d-4981-a2c7-30899c389f1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f00051e-bd')
Nov 24 22:30:10 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:10.805 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7e:33:1a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '7e:8a:65:da:aa:a2'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:30:10 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:10.806 106776 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.809 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.880 189613 DEBUG nova.virt.libvirt.driver [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.880 189613 DEBUG nova.virt.libvirt.driver [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.880 189613 DEBUG nova.virt.libvirt.driver [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] No VIF found with MAC fa:16:3e:d2:89:1f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 24 22:30:10 compute-0 nova_compute[189608]: 2025-11-24 22:30:10.881 189613 INFO nova.virt.libvirt.driver [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Using config drive
Nov 24 22:30:11 compute-0 nova_compute[189608]: 2025-11-24 22:30:11.195 189613 DEBUG nova.compute.manager [req-6f2cea9a-bc93-40ca-8dc9-94dc2fd8a4b0 req-57a1ab5f-9f93-4eb2-98ef-58ef9e4963c5 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Received event network-vif-plugged-05ab28a1-e08f-4aa8-83d6-671fd5720283 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:30:11 compute-0 nova_compute[189608]: 2025-11-24 22:30:11.196 189613 DEBUG oslo_concurrency.lockutils [req-6f2cea9a-bc93-40ca-8dc9-94dc2fd8a4b0 req-57a1ab5f-9f93-4eb2-98ef-58ef9e4963c5 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "57d86171-0790-4408-ae34-dfc07ee52747-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:30:11 compute-0 nova_compute[189608]: 2025-11-24 22:30:11.196 189613 DEBUG oslo_concurrency.lockutils [req-6f2cea9a-bc93-40ca-8dc9-94dc2fd8a4b0 req-57a1ab5f-9f93-4eb2-98ef-58ef9e4963c5 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "57d86171-0790-4408-ae34-dfc07ee52747-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:30:11 compute-0 nova_compute[189608]: 2025-11-24 22:30:11.197 189613 DEBUG oslo_concurrency.lockutils [req-6f2cea9a-bc93-40ca-8dc9-94dc2fd8a4b0 req-57a1ab5f-9f93-4eb2-98ef-58ef9e4963c5 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "57d86171-0790-4408-ae34-dfc07ee52747-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:30:11 compute-0 nova_compute[189608]: 2025-11-24 22:30:11.197 189613 DEBUG nova.compute.manager [req-6f2cea9a-bc93-40ca-8dc9-94dc2fd8a4b0 req-57a1ab5f-9f93-4eb2-98ef-58ef9e4963c5 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] No waiting events found dispatching network-vif-plugged-05ab28a1-e08f-4aa8-83d6-671fd5720283 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 22:30:11 compute-0 nova_compute[189608]: 2025-11-24 22:30:11.197 189613 WARNING nova.compute.manager [req-6f2cea9a-bc93-40ca-8dc9-94dc2fd8a4b0 req-57a1ab5f-9f93-4eb2-98ef-58ef9e4963c5 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Received unexpected event network-vif-plugged-05ab28a1-e08f-4aa8-83d6-671fd5720283 for instance with vm_state active and task_state None.
Nov 24 22:30:11 compute-0 nova_compute[189608]: 2025-11-24 22:30:11.198 189613 DEBUG nova.compute.manager [req-6f2cea9a-bc93-40ca-8dc9-94dc2fd8a4b0 req-57a1ab5f-9f93-4eb2-98ef-58ef9e4963c5 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Received event network-vif-plugged-fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:30:11 compute-0 nova_compute[189608]: 2025-11-24 22:30:11.198 189613 DEBUG oslo_concurrency.lockutils [req-6f2cea9a-bc93-40ca-8dc9-94dc2fd8a4b0 req-57a1ab5f-9f93-4eb2-98ef-58ef9e4963c5 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "f238e71a-660e-497c-8472-193245387bcf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:30:11 compute-0 nova_compute[189608]: 2025-11-24 22:30:11.199 189613 DEBUG oslo_concurrency.lockutils [req-6f2cea9a-bc93-40ca-8dc9-94dc2fd8a4b0 req-57a1ab5f-9f93-4eb2-98ef-58ef9e4963c5 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "f238e71a-660e-497c-8472-193245387bcf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:30:11 compute-0 nova_compute[189608]: 2025-11-24 22:30:11.199 189613 DEBUG oslo_concurrency.lockutils [req-6f2cea9a-bc93-40ca-8dc9-94dc2fd8a4b0 req-57a1ab5f-9f93-4eb2-98ef-58ef9e4963c5 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "f238e71a-660e-497c-8472-193245387bcf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:30:11 compute-0 nova_compute[189608]: 2025-11-24 22:30:11.200 189613 DEBUG nova.compute.manager [req-6f2cea9a-bc93-40ca-8dc9-94dc2fd8a4b0 req-57a1ab5f-9f93-4eb2-98ef-58ef9e4963c5 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Processing event network-vif-plugged-fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 24 22:30:11 compute-0 nova_compute[189608]: 2025-11-24 22:30:11.201 189613 DEBUG nova.compute.manager [req-6f2cea9a-bc93-40ca-8dc9-94dc2fd8a4b0 req-57a1ab5f-9f93-4eb2-98ef-58ef9e4963c5 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Received event network-vif-plugged-fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:30:11 compute-0 nova_compute[189608]: 2025-11-24 22:30:11.201 189613 DEBUG oslo_concurrency.lockutils [req-6f2cea9a-bc93-40ca-8dc9-94dc2fd8a4b0 req-57a1ab5f-9f93-4eb2-98ef-58ef9e4963c5 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "f238e71a-660e-497c-8472-193245387bcf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:30:11 compute-0 nova_compute[189608]: 2025-11-24 22:30:11.201 189613 DEBUG oslo_concurrency.lockutils [req-6f2cea9a-bc93-40ca-8dc9-94dc2fd8a4b0 req-57a1ab5f-9f93-4eb2-98ef-58ef9e4963c5 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "f238e71a-660e-497c-8472-193245387bcf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:30:11 compute-0 nova_compute[189608]: 2025-11-24 22:30:11.202 189613 DEBUG oslo_concurrency.lockutils [req-6f2cea9a-bc93-40ca-8dc9-94dc2fd8a4b0 req-57a1ab5f-9f93-4eb2-98ef-58ef9e4963c5 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "f238e71a-660e-497c-8472-193245387bcf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:30:11 compute-0 nova_compute[189608]: 2025-11-24 22:30:11.202 189613 DEBUG nova.compute.manager [req-6f2cea9a-bc93-40ca-8dc9-94dc2fd8a4b0 req-57a1ab5f-9f93-4eb2-98ef-58ef9e4963c5 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] No waiting events found dispatching network-vif-plugged-fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 22:30:11 compute-0 nova_compute[189608]: 2025-11-24 22:30:11.202 189613 WARNING nova.compute.manager [req-6f2cea9a-bc93-40ca-8dc9-94dc2fd8a4b0 req-57a1ab5f-9f93-4eb2-98ef-58ef9e4963c5 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Received unexpected event network-vif-plugged-fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13 for instance with vm_state building and task_state spawning.
Nov 24 22:30:11 compute-0 nova_compute[189608]: 2025-11-24 22:30:11.203 189613 DEBUG nova.compute.manager [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 24 22:30:11 compute-0 nova_compute[189608]: 2025-11-24 22:30:11.213 189613 DEBUG nova.virt.libvirt.driver [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 24 22:30:11 compute-0 nova_compute[189608]: 2025-11-24 22:30:11.214 189613 DEBUG nova.virt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Emitting event <LifecycleEvent: 1764023411.213269, f238e71a-660e-497c-8472-193245387bcf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:30:11 compute-0 nova_compute[189608]: 2025-11-24 22:30:11.214 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: f238e71a-660e-497c-8472-193245387bcf] VM Resumed (Lifecycle Event)
Nov 24 22:30:11 compute-0 nova_compute[189608]: 2025-11-24 22:30:11.229 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: f238e71a-660e-497c-8472-193245387bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:30:11 compute-0 nova_compute[189608]: 2025-11-24 22:30:11.232 189613 INFO nova.virt.libvirt.driver [-] [instance: f238e71a-660e-497c-8472-193245387bcf] Instance spawned successfully.
Nov 24 22:30:11 compute-0 nova_compute[189608]: 2025-11-24 22:30:11.232 189613 DEBUG nova.virt.libvirt.driver [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 24 22:30:11 compute-0 nova_compute[189608]: 2025-11-24 22:30:11.241 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: f238e71a-660e-497c-8472-193245387bcf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 22:30:11 compute-0 nova_compute[189608]: 2025-11-24 22:30:11.255 189613 DEBUG nova.virt.libvirt.driver [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:30:11 compute-0 nova_compute[189608]: 2025-11-24 22:30:11.256 189613 DEBUG nova.virt.libvirt.driver [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:30:11 compute-0 nova_compute[189608]: 2025-11-24 22:30:11.257 189613 DEBUG nova.virt.libvirt.driver [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:30:11 compute-0 nova_compute[189608]: 2025-11-24 22:30:11.257 189613 DEBUG nova.virt.libvirt.driver [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:30:11 compute-0 nova_compute[189608]: 2025-11-24 22:30:11.258 189613 DEBUG nova.virt.libvirt.driver [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:30:11 compute-0 nova_compute[189608]: 2025-11-24 22:30:11.258 189613 DEBUG nova.virt.libvirt.driver [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:30:11 compute-0 nova_compute[189608]: 2025-11-24 22:30:11.265 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: f238e71a-660e-497c-8472-193245387bcf] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 22:30:11 compute-0 nova_compute[189608]: 2025-11-24 22:30:11.318 189613 INFO nova.compute.manager [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Took 11.77 seconds to spawn the instance on the hypervisor.
Nov 24 22:30:11 compute-0 nova_compute[189608]: 2025-11-24 22:30:11.318 189613 DEBUG nova.compute.manager [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:30:11 compute-0 nova_compute[189608]: 2025-11-24 22:30:11.547 189613 INFO nova.virt.libvirt.driver [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Creating config drive at /var/lib/nova/instances/cf45f1e3-b80d-4213-80aa-995f57a9a476/disk.config
Nov 24 22:30:11 compute-0 nova_compute[189608]: 2025-11-24 22:30:11.552 189613 DEBUG oslo_concurrency.processutils [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cf45f1e3-b80d-4213-80aa-995f57a9a476/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvuc_x9y2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:30:11 compute-0 nova_compute[189608]: 2025-11-24 22:30:11.585 189613 DEBUG nova.network.neutron [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Successfully updated port: 5efccbc3-b2bb-4d9d-ba64-9382a4b2487b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 24 22:30:11 compute-0 nova_compute[189608]: 2025-11-24 22:30:11.590 189613 INFO nova.compute.manager [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Took 12.58 seconds to build instance.
Nov 24 22:30:11 compute-0 nova_compute[189608]: 2025-11-24 22:30:11.602 189613 DEBUG oslo_concurrency.lockutils [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Acquiring lock "refresh_cache-a3bee9ba-6618-44bd-a443-da9fff6862a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:30:11 compute-0 nova_compute[189608]: 2025-11-24 22:30:11.602 189613 DEBUG oslo_concurrency.lockutils [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Acquired lock "refresh_cache-a3bee9ba-6618-44bd-a443-da9fff6862a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:30:11 compute-0 nova_compute[189608]: 2025-11-24 22:30:11.603 189613 DEBUG nova.network.neutron [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 22:30:11 compute-0 nova_compute[189608]: 2025-11-24 22:30:11.605 189613 DEBUG oslo_concurrency.lockutils [None req-f55fcb02-7683-41e7-a5d8-ff9f46c72752 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Lock "f238e71a-660e-497c-8472-193245387bcf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:30:11 compute-0 nova_compute[189608]: 2025-11-24 22:30:11.699 189613 DEBUG oslo_concurrency.processutils [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cf45f1e3-b80d-4213-80aa-995f57a9a476/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvuc_x9y2" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:30:11 compute-0 ovn_controller[97889]: 2025-11-24T22:30:11Z|00096|memory|INFO|peak resident set size grew 51% in last 2632.4 seconds, from 16128 kB to 24388 kB
Nov 24 22:30:11 compute-0 ovn_controller[97889]: 2025-11-24T22:30:11Z|00097|memory|INFO|idl-cells-OVN_Southbound:11273 idl-cells-Open_vSwitch:984 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:380 lflow-cache-entries-cache-matches:295 lflow-cache-size-KB:1581 local_datapath_usage-KB:3 ofctrl_desired_flow_usage-KB:686 ofctrl_installed_flow_usage-KB:501 ofctrl_sb_flow_ref_usage-KB:257
Nov 24 22:30:11 compute-0 kernel: tap8f00051e-bd: entered promiscuous mode
Nov 24 22:30:11 compute-0 NetworkManager[56413]: <info>  [1764023411.8206] manager: (tap8f00051e-bd): new Tun device (/org/freedesktop/NetworkManager/Devices/52)
Nov 24 22:30:11 compute-0 ovn_controller[97889]: 2025-11-24T22:30:11Z|00098|binding|INFO|Claiming lport 8f00051e-bd87-48eb-aba6-5dbf3d527aef for this chassis.
Nov 24 22:30:11 compute-0 ovn_controller[97889]: 2025-11-24T22:30:11Z|00099|binding|INFO|8f00051e-bd87-48eb-aba6-5dbf3d527aef: Claiming fa:16:3e:d2:89:1f 10.100.0.11
Nov 24 22:30:11 compute-0 nova_compute[189608]: 2025-11-24 22:30:11.825 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:11 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:11.843 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:89:1f 10.100.0.11'], port_security=['fa:16:3e:d2:89:1f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'cf45f1e3-b80d-4213-80aa-995f57a9a476', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c160915-cb1d-4981-a2c7-30899c389f1d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ac27d3d1c734f4bab455262f79d3106', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9be940b5-626b-4f2c-8cdc-0aa939d2b4a5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5128d898-210e-40a7-b165-d9fdd3199b44, chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], logical_port=8f00051e-bd87-48eb-aba6-5dbf3d527aef) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:30:11 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:11.846 106776 INFO neutron.agent.ovn.metadata.agent [-] Port 8f00051e-bd87-48eb-aba6-5dbf3d527aef in datapath 6c160915-cb1d-4981-a2c7-30899c389f1d bound to our chassis
Nov 24 22:30:11 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:11.852 106776 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6c160915-cb1d-4981-a2c7-30899c389f1d
Nov 24 22:30:11 compute-0 nova_compute[189608]: 2025-11-24 22:30:11.858 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:11 compute-0 ovn_controller[97889]: 2025-11-24T22:30:11Z|00100|binding|INFO|Setting lport 8f00051e-bd87-48eb-aba6-5dbf3d527aef ovn-installed in OVS
Nov 24 22:30:11 compute-0 ovn_controller[97889]: 2025-11-24T22:30:11Z|00101|binding|INFO|Setting lport 8f00051e-bd87-48eb-aba6-5dbf3d527aef up in Southbound
Nov 24 22:30:11 compute-0 nova_compute[189608]: 2025-11-24 22:30:11.869 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:11 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:11.875 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[874a0668-6b1d-4dc4-ae61-c6520192b253]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:11 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:11.877 106776 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6c160915-c1 in ovnmeta-6c160915-cb1d-4981-a2c7-30899c389f1d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 24 22:30:11 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:11.880 240020 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6c160915-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 24 22:30:11 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:11.880 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[7f5777f4-473a-426c-80ba-4ebcfaa01860]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:11 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:11.882 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[4c5940cd-7ea1-4af5-982c-2b2bf8a68127]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:11 compute-0 systemd-machined[155884]: New machine qemu-10-instance-0000000a.
Nov 24 22:30:11 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-0000000a.
Nov 24 22:30:11 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:11.904 106891 DEBUG oslo.privsep.daemon [-] privsep: reply[8eeee1d4-cd61-45f8-b7bb-36a40946cab5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:11 compute-0 systemd-udevd[252402]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 22:30:11 compute-0 NetworkManager[56413]: <info>  [1764023411.9378] device (tap8f00051e-bd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 22:30:11 compute-0 NetworkManager[56413]: <info>  [1764023411.9386] device (tap8f00051e-bd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 24 22:30:11 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:11.942 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[a31a5583-f25a-48e0-a23a-ba7da7b8730d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:11 compute-0 podman[252376]: 2025-11-24 22:30:11.947936932 +0000 UTC m=+0.146380740 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd)
Nov 24 22:30:11 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:11.979 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[7afdf93c-68d4-4ff4-bd43-e8cdc31ab561]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:11 compute-0 systemd-udevd[252407]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 22:30:11 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:11.986 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[9bb715ed-5f29-4c3b-af4c-3924cca4d7f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:11 compute-0 NetworkManager[56413]: <info>  [1764023411.9882] manager: (tap6c160915-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/53)
Nov 24 22:30:12 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:12.035 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[14b44bc9-aabd-40c6-b7cf-524d8857257a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:12 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:12.039 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[fee04b78-1a17-435f-9a99-2f87c9a73f84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:12 compute-0 NetworkManager[56413]: <info>  [1764023412.0704] device (tap6c160915-c0): carrier: link connected
Nov 24 22:30:12 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:12.078 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[959a41c4-6357-403c-b1a7-0a6703e756c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:12 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:12.102 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[14f8d3b6-bde7-4c81-8dca-4d8bba475c9b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c160915-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:72:57:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 525956, 'reachable_time': 29113, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252435, 'error': None, 'target': 'ovnmeta-6c160915-cb1d-4981-a2c7-30899c389f1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:12 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:12.127 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[58b26d63-aebc-4f65-801f-660900cb8df3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe72:57c8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525956, 'tstamp': 525956}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252437, 'error': None, 'target': 'ovnmeta-6c160915-cb1d-4981-a2c7-30899c389f1d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:12 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:12.154 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[68dce568-0663-481a-b88c-d79fba73e7b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c160915-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:72:57:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 525956, 'reachable_time': 29113, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252441, 'error': None, 'target': 'ovnmeta-6c160915-cb1d-4981-a2c7-30899c389f1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:12 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:12.202 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[8bb22a14-56d3-411b-af67-75f304c38de1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:12 compute-0 nova_compute[189608]: 2025-11-24 22:30:12.233 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:12 compute-0 nova_compute[189608]: 2025-11-24 22:30:12.297 189613 DEBUG nova.virt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Emitting event <LifecycleEvent: 1764023412.2956994, cf45f1e3-b80d-4213-80aa-995f57a9a476 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:30:12 compute-0 nova_compute[189608]: 2025-11-24 22:30:12.297 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] VM Started (Lifecycle Event)
Nov 24 22:30:12 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:12.300 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[617345dc-a0ca-4747-bd85-7d3f62e127c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:12 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:12.302 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c160915-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:30:12 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:12.303 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 22:30:12 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:12.304 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c160915-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:30:12 compute-0 NetworkManager[56413]: <info>  [1764023412.3079] manager: (tap6c160915-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Nov 24 22:30:12 compute-0 nova_compute[189608]: 2025-11-24 22:30:12.308 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:12 compute-0 kernel: tap6c160915-c0: entered promiscuous mode
Nov 24 22:30:12 compute-0 nova_compute[189608]: 2025-11-24 22:30:12.313 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:12 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:12.314 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6c160915-c0, col_values=(('external_ids', {'iface-id': '5660e6d6-677f-4bf6-8ebf-40ac9c648155'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:30:12 compute-0 ovn_controller[97889]: 2025-11-24T22:30:12Z|00102|binding|INFO|Releasing lport 5660e6d6-677f-4bf6-8ebf-40ac9c648155 from this chassis (sb_readonly=0)
Nov 24 22:30:12 compute-0 nova_compute[189608]: 2025-11-24 22:30:12.317 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:12 compute-0 nova_compute[189608]: 2025-11-24 22:30:12.320 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:30:12 compute-0 nova_compute[189608]: 2025-11-24 22:30:12.327 189613 DEBUG nova.virt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Emitting event <LifecycleEvent: 1764023412.2959192, cf45f1e3-b80d-4213-80aa-995f57a9a476 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:30:12 compute-0 nova_compute[189608]: 2025-11-24 22:30:12.328 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] VM Paused (Lifecycle Event)
Nov 24 22:30:12 compute-0 nova_compute[189608]: 2025-11-24 22:30:12.348 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:30:12 compute-0 nova_compute[189608]: 2025-11-24 22:30:12.350 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:12 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:12.352 106776 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6c160915-cb1d-4981-a2c7-30899c389f1d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6c160915-cb1d-4981-a2c7-30899c389f1d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 24 22:30:12 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:12.354 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[da3344db-c271-41fd-ac20-360ec207a529]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:12 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:12.355 106776 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 24 22:30:12 compute-0 ovn_metadata_agent[106771]: global
Nov 24 22:30:12 compute-0 ovn_metadata_agent[106771]:     log         /dev/log local0 debug
Nov 24 22:30:12 compute-0 ovn_metadata_agent[106771]:     log-tag     haproxy-metadata-proxy-6c160915-cb1d-4981-a2c7-30899c389f1d
Nov 24 22:30:12 compute-0 ovn_metadata_agent[106771]:     user        root
Nov 24 22:30:12 compute-0 ovn_metadata_agent[106771]:     group       root
Nov 24 22:30:12 compute-0 ovn_metadata_agent[106771]:     maxconn     1024
Nov 24 22:30:12 compute-0 ovn_metadata_agent[106771]:     pidfile     /var/lib/neutron/external/pids/6c160915-cb1d-4981-a2c7-30899c389f1d.pid.haproxy
Nov 24 22:30:12 compute-0 ovn_metadata_agent[106771]:     daemon
Nov 24 22:30:12 compute-0 ovn_metadata_agent[106771]: 
Nov 24 22:30:12 compute-0 ovn_metadata_agent[106771]: defaults
Nov 24 22:30:12 compute-0 ovn_metadata_agent[106771]:     log global
Nov 24 22:30:12 compute-0 ovn_metadata_agent[106771]:     mode http
Nov 24 22:30:12 compute-0 ovn_metadata_agent[106771]:     option httplog
Nov 24 22:30:12 compute-0 ovn_metadata_agent[106771]:     option dontlognull
Nov 24 22:30:12 compute-0 ovn_metadata_agent[106771]:     option http-server-close
Nov 24 22:30:12 compute-0 ovn_metadata_agent[106771]:     option forwardfor
Nov 24 22:30:12 compute-0 ovn_metadata_agent[106771]:     retries                 3
Nov 24 22:30:12 compute-0 ovn_metadata_agent[106771]:     timeout http-request    30s
Nov 24 22:30:12 compute-0 ovn_metadata_agent[106771]:     timeout connect         30s
Nov 24 22:30:12 compute-0 ovn_metadata_agent[106771]:     timeout client          32s
Nov 24 22:30:12 compute-0 ovn_metadata_agent[106771]:     timeout server          32s
Nov 24 22:30:12 compute-0 ovn_metadata_agent[106771]:     timeout http-keep-alive 30s
Nov 24 22:30:12 compute-0 ovn_metadata_agent[106771]: 
Nov 24 22:30:12 compute-0 ovn_metadata_agent[106771]: 
Nov 24 22:30:12 compute-0 ovn_metadata_agent[106771]: listen listener
Nov 24 22:30:12 compute-0 ovn_metadata_agent[106771]:     bind 169.254.169.254:80
Nov 24 22:30:12 compute-0 ovn_metadata_agent[106771]:     server metadata /var/lib/neutron/metadata_proxy
Nov 24 22:30:12 compute-0 ovn_metadata_agent[106771]:     http-request add-header X-OVN-Network-ID 6c160915-cb1d-4981-a2c7-30899c389f1d
Nov 24 22:30:12 compute-0 ovn_metadata_agent[106771]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 24 22:30:12 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:12.356 106776 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6c160915-cb1d-4981-a2c7-30899c389f1d', 'env', 'PROCESS_TAG=haproxy-6c160915-cb1d-4981-a2c7-30899c389f1d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6c160915-cb1d-4981-a2c7-30899c389f1d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 24 22:30:12 compute-0 nova_compute[189608]: 2025-11-24 22:30:12.359 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 22:30:12 compute-0 nova_compute[189608]: 2025-11-24 22:30:12.377 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 22:30:12 compute-0 nova_compute[189608]: 2025-11-24 22:30:12.397 189613 DEBUG nova.network.neutron [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 22:30:12 compute-0 nova_compute[189608]: 2025-11-24 22:30:12.725 189613 DEBUG nova.compute.manager [req-fb821067-005e-4751-a2df-103e356806f9 req-4f74b549-2a4b-4588-b5c9-470cc70b5b41 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Received event network-changed-5efccbc3-b2bb-4d9d-ba64-9382a4b2487b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:30:12 compute-0 nova_compute[189608]: 2025-11-24 22:30:12.726 189613 DEBUG nova.compute.manager [req-fb821067-005e-4751-a2df-103e356806f9 req-4f74b549-2a4b-4588-b5c9-470cc70b5b41 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Refreshing instance network info cache due to event network-changed-5efccbc3-b2bb-4d9d-ba64-9382a4b2487b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 22:30:12 compute-0 nova_compute[189608]: 2025-11-24 22:30:12.726 189613 DEBUG oslo_concurrency.lockutils [req-fb821067-005e-4751-a2df-103e356806f9 req-4f74b549-2a4b-4588-b5c9-470cc70b5b41 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "refresh_cache-a3bee9ba-6618-44bd-a443-da9fff6862a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:30:12 compute-0 podman[252475]: 2025-11-24 22:30:12.938272463 +0000 UTC m=+0.115637173 container create 7581f0474dae89f2cb8063700043c14c424167d8202319b4d50a3989fa6d4a17 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6c160915-cb1d-4981-a2c7-30899c389f1d, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 24 22:30:12 compute-0 podman[252475]: 2025-11-24 22:30:12.878541408 +0000 UTC m=+0.055906118 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 22:30:13 compute-0 systemd[1]: Started libpod-conmon-7581f0474dae89f2cb8063700043c14c424167d8202319b4d50a3989fa6d4a17.scope.
Nov 24 22:30:13 compute-0 systemd[1]: Started libcrun container.
Nov 24 22:30:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd13bd9b25c27a95077c93c5289af0a516146f8bc40d1dcf25c3c82451e63854/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 24 22:30:13 compute-0 podman[252475]: 2025-11-24 22:30:13.06396998 +0000 UTC m=+0.241334710 container init 7581f0474dae89f2cb8063700043c14c424167d8202319b4d50a3989fa6d4a17 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6c160915-cb1d-4981-a2c7-30899c389f1d, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:30:13 compute-0 podman[252475]: 2025-11-24 22:30:13.07591524 +0000 UTC m=+0.253279940 container start 7581f0474dae89f2cb8063700043c14c424167d8202319b4d50a3989fa6d4a17 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6c160915-cb1d-4981-a2c7-30899c389f1d, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 22:30:13 compute-0 neutron-haproxy-ovnmeta-6c160915-cb1d-4981-a2c7-30899c389f1d[252488]: [NOTICE]   (252492) : New worker (252494) forked
Nov 24 22:30:13 compute-0 neutron-haproxy-ovnmeta-6c160915-cb1d-4981-a2c7-30899c389f1d[252488]: [NOTICE]   (252492) : Loading success.
Nov 24 22:30:13 compute-0 nova_compute[189608]: 2025-11-24 22:30:13.924 189613 DEBUG nova.network.neutron [req-bea5cdc6-6beb-480a-9367-f182f352eede req-24c9bd82-f52a-4f26-b35c-ad7a3391ff38 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Updated VIF entry in instance network info cache for port 8f00051e-bd87-48eb-aba6-5dbf3d527aef. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 22:30:13 compute-0 nova_compute[189608]: 2025-11-24 22:30:13.925 189613 DEBUG nova.network.neutron [req-bea5cdc6-6beb-480a-9367-f182f352eede req-24c9bd82-f52a-4f26-b35c-ad7a3391ff38 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Updating instance_info_cache with network_info: [{"id": "8f00051e-bd87-48eb-aba6-5dbf3d527aef", "address": "fa:16:3e:d2:89:1f", "network": {"id": "6c160915-cb1d-4981-a2c7-30899c389f1d", "bridge": "br-int", "label": "tempest-network-smoke--162777084", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ac27d3d1c734f4bab455262f79d3106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f00051e-bd", "ovs_interfaceid": "8f00051e-bd87-48eb-aba6-5dbf3d527aef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:30:13 compute-0 nova_compute[189608]: 2025-11-24 22:30:13.943 189613 DEBUG oslo_concurrency.lockutils [req-bea5cdc6-6beb-480a-9367-f182f352eede req-24c9bd82-f52a-4f26-b35c-ad7a3391ff38 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Releasing lock "refresh_cache-cf45f1e3-b80d-4213-80aa-995f57a9a476" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.021 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.424 189613 DEBUG nova.network.neutron [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Updating instance_info_cache with network_info: [{"id": "5efccbc3-b2bb-4d9d-ba64-9382a4b2487b", "address": "fa:16:3e:40:6c:bb", "network": {"id": "a164481b-21c8-4cae-a6e9-b470d8a55a1f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.88", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6957a775da42c9b535753d6b0279d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5efccbc3-b2", "ovs_interfaceid": "5efccbc3-b2bb-4d9d-ba64-9382a4b2487b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.457 189613 DEBUG oslo_concurrency.lockutils [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Releasing lock "refresh_cache-a3bee9ba-6618-44bd-a443-da9fff6862a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.458 189613 DEBUG nova.compute.manager [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Instance network_info: |[{"id": "5efccbc3-b2bb-4d9d-ba64-9382a4b2487b", "address": "fa:16:3e:40:6c:bb", "network": {"id": "a164481b-21c8-4cae-a6e9-b470d8a55a1f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.88", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6957a775da42c9b535753d6b0279d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5efccbc3-b2", "ovs_interfaceid": "5efccbc3-b2bb-4d9d-ba64-9382a4b2487b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.459 189613 DEBUG oslo_concurrency.lockutils [req-fb821067-005e-4751-a2df-103e356806f9 req-4f74b549-2a4b-4588-b5c9-470cc70b5b41 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquired lock "refresh_cache-a3bee9ba-6618-44bd-a443-da9fff6862a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.459 189613 DEBUG nova.network.neutron [req-fb821067-005e-4751-a2df-103e356806f9 req-4f74b549-2a4b-4588-b5c9-470cc70b5b41 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Refreshing network info cache for port 5efccbc3-b2bb-4d9d-ba64-9382a4b2487b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.463 189613 DEBUG nova.virt.libvirt.driver [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Start _get_guest_xml network_info=[{"id": "5efccbc3-b2bb-4d9d-ba64-9382a4b2487b", "address": "fa:16:3e:40:6c:bb", "network": {"id": "a164481b-21c8-4cae-a6e9-b470d8a55a1f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.88", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6957a775da42c9b535753d6b0279d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5efccbc3-b2", "ovs_interfaceid": "5efccbc3-b2bb-4d9d-ba64-9382a4b2487b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T22:29:48Z,direct_url=<?>,disk_format='qcow2',id=ea88776c-3c0b-4e74-99b4-08aadc81390f,min_disk=0,min_ram=0,name='tempest-scenario-img--1781237514',owner='4a6957a775da42c9b535753d6b0279d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T22:29:50Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'size': 0, 'encryption_format': None, 'guest_format': None, 'image_id': 'ea88776c-3c0b-4e74-99b4-08aadc81390f'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.473 189613 WARNING nova.virt.libvirt.driver [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.486 189613 DEBUG nova.virt.libvirt.host [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.488 189613 DEBUG nova.virt.libvirt.host [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.494 189613 DEBUG nova.virt.libvirt.host [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.495 189613 DEBUG nova.virt.libvirt.host [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.495 189613 DEBUG nova.virt.libvirt.driver [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.496 189613 DEBUG nova.virt.hardware [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-24T22:28:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a49f1e6c-1051-4dea-812e-0063121444a0',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T22:29:48Z,direct_url=<?>,disk_format='qcow2',id=ea88776c-3c0b-4e74-99b4-08aadc81390f,min_disk=0,min_ram=0,name='tempest-scenario-img--1781237514',owner='4a6957a775da42c9b535753d6b0279d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T22:29:50Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.497 189613 DEBUG nova.virt.hardware [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.497 189613 DEBUG nova.virt.hardware [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.498 189613 DEBUG nova.virt.hardware [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.498 189613 DEBUG nova.virt.hardware [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.498 189613 DEBUG nova.virt.hardware [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.499 189613 DEBUG nova.virt.hardware [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.499 189613 DEBUG nova.virt.hardware [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.500 189613 DEBUG nova.virt.hardware [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.500 189613 DEBUG nova.virt.hardware [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.501 189613 DEBUG nova.virt.hardware [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.506 189613 DEBUG nova.virt.libvirt.vif [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T22:30:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-0491564-asg-haouulsup5dm-5bjba57zh6x7-pvowny27behx',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-0491564-asg-haouulsup5dm-5bjba57zh6x7-pvowny27behx',id=11,image_ref='ea88776c-3c0b-4e74-99b4-08aadc81390f',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='c6477657-e9b0-476c-83b3-9dc474e946c6'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4a6957a775da42c9b535753d6b0279d6',ramdisk_id='',reservation_id='r-nqqdpcc1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ea88776c-3c0b-4e74-99b4-08aadc81390f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-332462970',owner_user_name='tempest-PrometheusGabbiTest-332462970-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T22:30:06Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='fcf527fb124b42b9ab6a20cc0938b39f',uuid=a3bee9ba-6618-44bd-a443-da9fff6862a9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5efccbc3-b2bb-4d9d-ba64-9382a4b2487b", "address": "fa:16:3e:40:6c:bb", "network": {"id": "a164481b-21c8-4cae-a6e9-b470d8a55a1f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.88", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6957a775da42c9b535753d6b0279d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5efccbc3-b2", "ovs_interfaceid": "5efccbc3-b2bb-4d9d-ba64-9382a4b2487b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.507 189613 DEBUG nova.network.os_vif_util [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Converting VIF {"id": "5efccbc3-b2bb-4d9d-ba64-9382a4b2487b", "address": "fa:16:3e:40:6c:bb", "network": {"id": "a164481b-21c8-4cae-a6e9-b470d8a55a1f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.88", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6957a775da42c9b535753d6b0279d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5efccbc3-b2", "ovs_interfaceid": "5efccbc3-b2bb-4d9d-ba64-9382a4b2487b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.508 189613 DEBUG nova.network.os_vif_util [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:6c:bb,bridge_name='br-int',has_traffic_filtering=True,id=5efccbc3-b2bb-4d9d-ba64-9382a4b2487b,network=Network(a164481b-21c8-4cae-a6e9-b470d8a55a1f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5efccbc3-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.510 189613 DEBUG nova.objects.instance [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Lazy-loading 'pci_devices' on Instance uuid a3bee9ba-6618-44bd-a443-da9fff6862a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.530 189613 DEBUG nova.virt.libvirt.driver [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] End _get_guest_xml xml=<domain type="kvm">
Nov 24 22:30:14 compute-0 nova_compute[189608]:   <uuid>a3bee9ba-6618-44bd-a443-da9fff6862a9</uuid>
Nov 24 22:30:14 compute-0 nova_compute[189608]:   <name>instance-0000000b</name>
Nov 24 22:30:14 compute-0 nova_compute[189608]:   <memory>131072</memory>
Nov 24 22:30:14 compute-0 nova_compute[189608]:   <vcpu>1</vcpu>
Nov 24 22:30:14 compute-0 nova_compute[189608]:   <metadata>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 22:30:14 compute-0 nova_compute[189608]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:       <nova:name>te-0491564-asg-haouulsup5dm-5bjba57zh6x7-pvowny27behx</nova:name>
Nov 24 22:30:14 compute-0 nova_compute[189608]:       <nova:creationTime>2025-11-24 22:30:14</nova:creationTime>
Nov 24 22:30:14 compute-0 nova_compute[189608]:       <nova:flavor name="m1.nano">
Nov 24 22:30:14 compute-0 nova_compute[189608]:         <nova:memory>128</nova:memory>
Nov 24 22:30:14 compute-0 nova_compute[189608]:         <nova:disk>1</nova:disk>
Nov 24 22:30:14 compute-0 nova_compute[189608]:         <nova:swap>0</nova:swap>
Nov 24 22:30:14 compute-0 nova_compute[189608]:         <nova:ephemeral>0</nova:ephemeral>
Nov 24 22:30:14 compute-0 nova_compute[189608]:         <nova:vcpus>1</nova:vcpus>
Nov 24 22:30:14 compute-0 nova_compute[189608]:       </nova:flavor>
Nov 24 22:30:14 compute-0 nova_compute[189608]:       <nova:owner>
Nov 24 22:30:14 compute-0 nova_compute[189608]:         <nova:user uuid="fcf527fb124b42b9ab6a20cc0938b39f">tempest-PrometheusGabbiTest-332462970-project-member</nova:user>
Nov 24 22:30:14 compute-0 nova_compute[189608]:         <nova:project uuid="4a6957a775da42c9b535753d6b0279d6">tempest-PrometheusGabbiTest-332462970</nova:project>
Nov 24 22:30:14 compute-0 nova_compute[189608]:       </nova:owner>
Nov 24 22:30:14 compute-0 nova_compute[189608]:       <nova:root type="image" uuid="ea88776c-3c0b-4e74-99b4-08aadc81390f"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:       <nova:ports>
Nov 24 22:30:14 compute-0 nova_compute[189608]:         <nova:port uuid="5efccbc3-b2bb-4d9d-ba64-9382a4b2487b">
Nov 24 22:30:14 compute-0 nova_compute[189608]:           <nova:ip type="fixed" address="10.100.0.88" ipVersion="4"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:         </nova:port>
Nov 24 22:30:14 compute-0 nova_compute[189608]:       </nova:ports>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     </nova:instance>
Nov 24 22:30:14 compute-0 nova_compute[189608]:   </metadata>
Nov 24 22:30:14 compute-0 nova_compute[189608]:   <sysinfo type="smbios">
Nov 24 22:30:14 compute-0 nova_compute[189608]:     <system>
Nov 24 22:30:14 compute-0 nova_compute[189608]:       <entry name="manufacturer">RDO</entry>
Nov 24 22:30:14 compute-0 nova_compute[189608]:       <entry name="product">OpenStack Compute</entry>
Nov 24 22:30:14 compute-0 nova_compute[189608]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 22:30:14 compute-0 nova_compute[189608]:       <entry name="serial">a3bee9ba-6618-44bd-a443-da9fff6862a9</entry>
Nov 24 22:30:14 compute-0 nova_compute[189608]:       <entry name="uuid">a3bee9ba-6618-44bd-a443-da9fff6862a9</entry>
Nov 24 22:30:14 compute-0 nova_compute[189608]:       <entry name="family">Virtual Machine</entry>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     </system>
Nov 24 22:30:14 compute-0 nova_compute[189608]:   </sysinfo>
Nov 24 22:30:14 compute-0 nova_compute[189608]:   <os>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     <boot dev="hd"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     <smbios mode="sysinfo"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:   </os>
Nov 24 22:30:14 compute-0 nova_compute[189608]:   <features>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     <acpi/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     <apic/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     <vmcoreinfo/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:   </features>
Nov 24 22:30:14 compute-0 nova_compute[189608]:   <clock offset="utc">
Nov 24 22:30:14 compute-0 nova_compute[189608]:     <timer name="pit" tickpolicy="delay"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     <timer name="hpet" present="no"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:   </clock>
Nov 24 22:30:14 compute-0 nova_compute[189608]:   <cpu mode="host-model" match="exact">
Nov 24 22:30:14 compute-0 nova_compute[189608]:     <topology sockets="1" cores="1" threads="1"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:   </cpu>
Nov 24 22:30:14 compute-0 nova_compute[189608]:   <devices>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     <disk type="file" device="disk">
Nov 24 22:30:14 compute-0 nova_compute[189608]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:       <source file="/var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:       <target dev="vda" bus="virtio"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     </disk>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     <disk type="file" device="cdrom">
Nov 24 22:30:14 compute-0 nova_compute[189608]:       <driver name="qemu" type="raw" cache="none"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:       <source file="/var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.config"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:       <target dev="sda" bus="sata"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     </disk>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     <interface type="ethernet">
Nov 24 22:30:14 compute-0 nova_compute[189608]:       <mac address="fa:16:3e:40:6c:bb"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:       <model type="virtio"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:       <driver name="vhost" rx_queue_size="512"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:       <mtu size="1442"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:       <target dev="tap5efccbc3-b2"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     </interface>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     <serial type="pty">
Nov 24 22:30:14 compute-0 nova_compute[189608]:       <log file="/var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/console.log" append="off"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     </serial>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     <video>
Nov 24 22:30:14 compute-0 nova_compute[189608]:       <model type="virtio"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     </video>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     <input type="tablet" bus="usb"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     <rng model="virtio">
Nov 24 22:30:14 compute-0 nova_compute[189608]:       <backend model="random">/dev/urandom</backend>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     </rng>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     <controller type="usb" index="0"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     <memballoon model="virtio">
Nov 24 22:30:14 compute-0 nova_compute[189608]:       <stats period="10"/>
Nov 24 22:30:14 compute-0 nova_compute[189608]:     </memballoon>
Nov 24 22:30:14 compute-0 nova_compute[189608]:   </devices>
Nov 24 22:30:14 compute-0 nova_compute[189608]: </domain>
Nov 24 22:30:14 compute-0 nova_compute[189608]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.531 189613 DEBUG nova.compute.manager [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Preparing to wait for external event network-vif-plugged-5efccbc3-b2bb-4d9d-ba64-9382a4b2487b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.531 189613 DEBUG oslo_concurrency.lockutils [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Acquiring lock "a3bee9ba-6618-44bd-a443-da9fff6862a9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.532 189613 DEBUG oslo_concurrency.lockutils [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Lock "a3bee9ba-6618-44bd-a443-da9fff6862a9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.533 189613 DEBUG oslo_concurrency.lockutils [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Lock "a3bee9ba-6618-44bd-a443-da9fff6862a9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.534 189613 DEBUG nova.virt.libvirt.vif [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T22:30:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-0491564-asg-haouulsup5dm-5bjba57zh6x7-pvowny27behx',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-0491564-asg-haouulsup5dm-5bjba57zh6x7-pvowny27behx',id=11,image_ref='ea88776c-3c0b-4e74-99b4-08aadc81390f',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='c6477657-e9b0-476c-83b3-9dc474e946c6'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4a6957a775da42c9b535753d6b0279d6',ramdisk_id='',reservation_id='r-nqqdpcc1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ea88776c-3c0b-4e74-99b4-08aadc81390f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-332462970',owner_user_name='tempest-PrometheusGabbiTest-332462970-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T22:30:06Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='fcf527fb124b42b9ab6a20cc0938b39f',uuid=a3bee9ba-6618-44bd-a443-da9fff6862a9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5efccbc3-b2bb-4d9d-ba64-9382a4b2487b", "address": "fa:16:3e:40:6c:bb", "network": {"id": "a164481b-21c8-4cae-a6e9-b470d8a55a1f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.88", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6957a775da42c9b535753d6b0279d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5efccbc3-b2", "ovs_interfaceid": "5efccbc3-b2bb-4d9d-ba64-9382a4b2487b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.534 189613 DEBUG nova.network.os_vif_util [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Converting VIF {"id": "5efccbc3-b2bb-4d9d-ba64-9382a4b2487b", "address": "fa:16:3e:40:6c:bb", "network": {"id": "a164481b-21c8-4cae-a6e9-b470d8a55a1f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.88", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6957a775da42c9b535753d6b0279d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5efccbc3-b2", "ovs_interfaceid": "5efccbc3-b2bb-4d9d-ba64-9382a4b2487b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.535 189613 DEBUG nova.network.os_vif_util [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:6c:bb,bridge_name='br-int',has_traffic_filtering=True,id=5efccbc3-b2bb-4d9d-ba64-9382a4b2487b,network=Network(a164481b-21c8-4cae-a6e9-b470d8a55a1f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5efccbc3-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.536 189613 DEBUG os_vif [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:6c:bb,bridge_name='br-int',has_traffic_filtering=True,id=5efccbc3-b2bb-4d9d-ba64-9382a4b2487b,network=Network(a164481b-21c8-4cae-a6e9-b470d8a55a1f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5efccbc3-b2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.536 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.537 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.538 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.542 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.542 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5efccbc3-b2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.543 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5efccbc3-b2, col_values=(('external_ids', {'iface-id': '5efccbc3-b2bb-4d9d-ba64-9382a4b2487b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:40:6c:bb', 'vm-uuid': 'a3bee9ba-6618-44bd-a443-da9fff6862a9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:30:14 compute-0 NetworkManager[56413]: <info>  [1764023414.5475] manager: (tap5efccbc3-b2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.548 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.560 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.563 189613 INFO os_vif [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:6c:bb,bridge_name='br-int',has_traffic_filtering=True,id=5efccbc3-b2bb-4d9d-ba64-9382a4b2487b,network=Network(a164481b-21c8-4cae-a6e9-b470d8a55a1f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5efccbc3-b2')
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.634 189613 DEBUG nova.virt.libvirt.driver [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.654 189613 DEBUG nova.virt.libvirt.driver [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.654 189613 DEBUG nova.virt.libvirt.driver [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] No VIF found with MAC fa:16:3e:40:6c:bb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 24 22:30:14 compute-0 nova_compute[189608]: 2025-11-24 22:30:14.656 189613 INFO nova.virt.libvirt.driver [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Using config drive
Nov 24 22:30:14 compute-0 podman[252505]: 2025-11-24 22:30:14.745205279 +0000 UTC m=+0.132404265 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 24 22:30:15 compute-0 sshd-session[252361]: Invalid user  from 185.217.1.246 port 31831
Nov 24 22:30:16 compute-0 nova_compute[189608]: 2025-11-24 22:30:16.218 189613 DEBUG nova.compute.manager [req-ae972016-4437-47ee-97c4-4c2418f4af63 req-68b5d36e-7d66-4bc0-80d5-982272e8e1af c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Received event network-changed-05ab28a1-e08f-4aa8-83d6-671fd5720283 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:30:16 compute-0 nova_compute[189608]: 2025-11-24 22:30:16.218 189613 DEBUG nova.compute.manager [req-ae972016-4437-47ee-97c4-4c2418f4af63 req-68b5d36e-7d66-4bc0-80d5-982272e8e1af c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Refreshing instance network info cache due to event network-changed-05ab28a1-e08f-4aa8-83d6-671fd5720283. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 22:30:16 compute-0 nova_compute[189608]: 2025-11-24 22:30:16.218 189613 DEBUG oslo_concurrency.lockutils [req-ae972016-4437-47ee-97c4-4c2418f4af63 req-68b5d36e-7d66-4bc0-80d5-982272e8e1af c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "refresh_cache-57d86171-0790-4408-ae34-dfc07ee52747" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:30:16 compute-0 nova_compute[189608]: 2025-11-24 22:30:16.219 189613 DEBUG oslo_concurrency.lockutils [req-ae972016-4437-47ee-97c4-4c2418f4af63 req-68b5d36e-7d66-4bc0-80d5-982272e8e1af c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquired lock "refresh_cache-57d86171-0790-4408-ae34-dfc07ee52747" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:30:16 compute-0 nova_compute[189608]: 2025-11-24 22:30:16.219 189613 DEBUG nova.network.neutron [req-ae972016-4437-47ee-97c4-4c2418f4af63 req-68b5d36e-7d66-4bc0-80d5-982272e8e1af c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Refreshing network info cache for port 05ab28a1-e08f-4aa8-83d6-671fd5720283 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 22:30:16 compute-0 nova_compute[189608]: 2025-11-24 22:30:16.364 189613 INFO nova.virt.libvirt.driver [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Creating config drive at /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.config
Nov 24 22:30:16 compute-0 nova_compute[189608]: 2025-11-24 22:30:16.398 189613 DEBUG oslo_concurrency.processutils [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7l7xqxl2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:30:16 compute-0 sshd-session[252361]: Disconnecting invalid user  185.217.1.246 port 31831: Change of username or service not allowed: (,ssh-connection) -> (api,ssh-connection) [preauth]
Nov 24 22:30:16 compute-0 nova_compute[189608]: 2025-11-24 22:30:16.543 189613 DEBUG oslo_concurrency.processutils [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7l7xqxl2" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:30:16 compute-0 kernel: tap5efccbc3-b2: entered promiscuous mode
Nov 24 22:30:16 compute-0 NetworkManager[56413]: <info>  [1764023416.6431] manager: (tap5efccbc3-b2): new Tun device (/org/freedesktop/NetworkManager/Devices/56)
Nov 24 22:30:16 compute-0 nova_compute[189608]: 2025-11-24 22:30:16.645 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:16 compute-0 ovn_controller[97889]: 2025-11-24T22:30:16Z|00103|binding|INFO|Claiming lport 5efccbc3-b2bb-4d9d-ba64-9382a4b2487b for this chassis.
Nov 24 22:30:16 compute-0 ovn_controller[97889]: 2025-11-24T22:30:16Z|00104|binding|INFO|5efccbc3-b2bb-4d9d-ba64-9382a4b2487b: Claiming fa:16:3e:40:6c:bb 10.100.0.88
Nov 24 22:30:16 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:16.665 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:6c:bb 10.100.0.88'], port_security=['fa:16:3e:40:6c:bb 10.100.0.88'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.88/16', 'neutron:device_id': 'a3bee9ba-6618-44bd-a443-da9fff6862a9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a164481b-21c8-4cae-a6e9-b470d8a55a1f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4a6957a775da42c9b535753d6b0279d6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '24045f91-3265-40cf-b7b6-d2589223975b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3186cdf0-d894-4e3e-a84d-b369c1fcfb08, chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], logical_port=5efccbc3-b2bb-4d9d-ba64-9382a4b2487b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:30:16 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:16.667 106776 INFO neutron.agent.ovn.metadata.agent [-] Port 5efccbc3-b2bb-4d9d-ba64-9382a4b2487b in datapath a164481b-21c8-4cae-a6e9-b470d8a55a1f bound to our chassis
Nov 24 22:30:16 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:16.672 106776 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a164481b-21c8-4cae-a6e9-b470d8a55a1f
Nov 24 22:30:16 compute-0 nova_compute[189608]: 2025-11-24 22:30:16.705 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:16 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:16.703 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[bb5d1280-4ca3-4ca7-baff-b29c4f52e2b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:16 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:16.708 106776 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa164481b-21 in ovnmeta-a164481b-21c8-4cae-a6e9-b470d8a55a1f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 24 22:30:16 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:16.711 240020 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa164481b-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 24 22:30:16 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:16.711 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[a133f253-cf63-4ada-b0e1-8eee19c3f8a2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:16 compute-0 systemd-udevd[252543]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 22:30:16 compute-0 systemd-machined[155884]: New machine qemu-11-instance-0000000b.
Nov 24 22:30:16 compute-0 ovn_controller[97889]: 2025-11-24T22:30:16Z|00105|binding|INFO|Setting lport 5efccbc3-b2bb-4d9d-ba64-9382a4b2487b ovn-installed in OVS
Nov 24 22:30:16 compute-0 ovn_controller[97889]: 2025-11-24T22:30:16Z|00106|binding|INFO|Setting lport 5efccbc3-b2bb-4d9d-ba64-9382a4b2487b up in Southbound
Nov 24 22:30:16 compute-0 nova_compute[189608]: 2025-11-24 22:30:16.716 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:16 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:16.715 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[f3135b29-bdd1-4c33-98b5-185a8ddb16d9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:16 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-0000000b.
Nov 24 22:30:16 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:16.729 106891 DEBUG oslo.privsep.daemon [-] privsep: reply[9321f876-ce39-4fe9-955c-10c22f972d59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:16 compute-0 NetworkManager[56413]: <info>  [1764023416.7479] device (tap5efccbc3-b2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 22:30:16 compute-0 NetworkManager[56413]: <info>  [1764023416.7489] device (tap5efccbc3-b2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 24 22:30:16 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:16.766 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[b3a4c47b-bc10-497d-9c22-ad86378f0f3e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:16 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:16.809 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d2f80616-70e9-484c-836d-1edab81fe5d9, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:30:16 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:16.813 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[a933c3dd-6074-4174-9dbe-b48892e92bd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:16 compute-0 systemd-udevd[252547]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 22:30:16 compute-0 NetworkManager[56413]: <info>  [1764023416.8292] manager: (tapa164481b-20): new Veth device (/org/freedesktop/NetworkManager/Devices/57)
Nov 24 22:30:16 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:16.828 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[e4dd5808-c798-4863-9365-305688feabb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:16 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:16.866 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[87d7f952-f5d5-41eb-b172-57a799d54d74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:16 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:16.871 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[7feef7ca-841f-4416-a3ea-2405edadeb1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:16 compute-0 NetworkManager[56413]: <info>  [1764023416.9018] device (tapa164481b-20): carrier: link connected
Nov 24 22:30:16 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:16.910 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[94b10442-7f07-4cf2-8f22-38352aa101e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:16 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:16.931 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[28490131-85a9-4000-9418-5a20b433939d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa164481b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:a0:98'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 526439, 'reachable_time': 22158, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252575, 'error': None, 'target': 'ovnmeta-a164481b-21c8-4cae-a6e9-b470d8a55a1f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:16 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:16.954 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[42c98f47-0f38-430d-a7c5-24089abfff3f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef9:a098'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 526439, 'tstamp': 526439}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252576, 'error': None, 'target': 'ovnmeta-a164481b-21c8-4cae-a6e9-b470d8a55a1f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:16 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:16.981 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[d62ef0e5-c447-4f55-b653-16be743577aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa164481b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:a0:98'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 526439, 'reachable_time': 22158, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252577, 'error': None, 'target': 'ovnmeta-a164481b-21c8-4cae-a6e9-b470d8a55a1f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:17.035 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[8b62fbb2-0739-42ba-9f91-09f48372c0e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.102 189613 DEBUG nova.compute.manager [req-bd34ba22-2478-40cb-b909-654c6a3cc66f req-7b9b0bbd-1dc2-43b1-a50b-37e3b55e2a88 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Received event network-vif-plugged-8f00051e-bd87-48eb-aba6-5dbf3d527aef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.103 189613 DEBUG oslo_concurrency.lockutils [req-bd34ba22-2478-40cb-b909-654c6a3cc66f req-7b9b0bbd-1dc2-43b1-a50b-37e3b55e2a88 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "cf45f1e3-b80d-4213-80aa-995f57a9a476-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.104 189613 DEBUG oslo_concurrency.lockutils [req-bd34ba22-2478-40cb-b909-654c6a3cc66f req-7b9b0bbd-1dc2-43b1-a50b-37e3b55e2a88 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "cf45f1e3-b80d-4213-80aa-995f57a9a476-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.104 189613 DEBUG oslo_concurrency.lockutils [req-bd34ba22-2478-40cb-b909-654c6a3cc66f req-7b9b0bbd-1dc2-43b1-a50b-37e3b55e2a88 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "cf45f1e3-b80d-4213-80aa-995f57a9a476-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.107 189613 DEBUG nova.compute.manager [req-bd34ba22-2478-40cb-b909-654c6a3cc66f req-7b9b0bbd-1dc2-43b1-a50b-37e3b55e2a88 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Processing event network-vif-plugged-8f00051e-bd87-48eb-aba6-5dbf3d527aef _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.108 189613 DEBUG nova.compute.manager [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.123 189613 DEBUG nova.virt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Emitting event <LifecycleEvent: 1764023417.1159701, cf45f1e3-b80d-4213-80aa-995f57a9a476 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.124 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] VM Resumed (Lifecycle Event)
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.127 189613 DEBUG nova.virt.libvirt.driver [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.142 189613 INFO nova.virt.libvirt.driver [-] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Instance spawned successfully.
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.145 189613 DEBUG nova.virt.libvirt.driver [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.152 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:30:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:17.162 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[5d5c6e14-7354-407b-bb0a-8c685e49f573]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:17.171 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa164481b-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:30:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:17.171 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.171 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 22:30:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:17.172 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa164481b-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:30:17 compute-0 kernel: tapa164481b-20: entered promiscuous mode
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.176 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:17 compute-0 NetworkManager[56413]: <info>  [1764023417.1788] manager: (tapa164481b-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Nov 24 22:30:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:17.185 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa164481b-20, col_values=(('external_ids', {'iface-id': 'ce3870c0-48db-470b-8d5d-479134c9b554'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.187 189613 DEBUG nova.virt.libvirt.driver [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.187 189613 DEBUG nova.virt.libvirt.driver [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:30:17 compute-0 ovn_controller[97889]: 2025-11-24T22:30:17Z|00107|binding|INFO|Releasing lport ce3870c0-48db-470b-8d5d-479134c9b554 from this chassis (sb_readonly=0)
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.188 189613 DEBUG nova.virt.libvirt.driver [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.189 189613 DEBUG nova.virt.libvirt.driver [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.189 189613 DEBUG nova.virt.libvirt.driver [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.190 189613 DEBUG nova.virt.libvirt.driver [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:30:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:17.193 106776 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a164481b-21c8-4cae-a6e9-b470d8a55a1f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a164481b-21c8-4cae-a6e9-b470d8a55a1f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.196 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.198 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 22:30:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:17.198 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[8cbe72ae-1314-469b-bcc2-1df19002eb8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:17.201 106776 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 24 22:30:17 compute-0 ovn_metadata_agent[106771]: global
Nov 24 22:30:17 compute-0 ovn_metadata_agent[106771]:     log         /dev/log local0 debug
Nov 24 22:30:17 compute-0 ovn_metadata_agent[106771]:     log-tag     haproxy-metadata-proxy-a164481b-21c8-4cae-a6e9-b470d8a55a1f
Nov 24 22:30:17 compute-0 ovn_metadata_agent[106771]:     user        root
Nov 24 22:30:17 compute-0 ovn_metadata_agent[106771]:     group       root
Nov 24 22:30:17 compute-0 ovn_metadata_agent[106771]:     maxconn     1024
Nov 24 22:30:17 compute-0 ovn_metadata_agent[106771]:     pidfile     /var/lib/neutron/external/pids/a164481b-21c8-4cae-a6e9-b470d8a55a1f.pid.haproxy
Nov 24 22:30:17 compute-0 ovn_metadata_agent[106771]:     daemon
Nov 24 22:30:17 compute-0 ovn_metadata_agent[106771]: 
Nov 24 22:30:17 compute-0 ovn_metadata_agent[106771]: defaults
Nov 24 22:30:17 compute-0 ovn_metadata_agent[106771]:     log global
Nov 24 22:30:17 compute-0 ovn_metadata_agent[106771]:     mode http
Nov 24 22:30:17 compute-0 ovn_metadata_agent[106771]:     option httplog
Nov 24 22:30:17 compute-0 ovn_metadata_agent[106771]:     option dontlognull
Nov 24 22:30:17 compute-0 ovn_metadata_agent[106771]:     option http-server-close
Nov 24 22:30:17 compute-0 ovn_metadata_agent[106771]:     option forwardfor
Nov 24 22:30:17 compute-0 ovn_metadata_agent[106771]:     retries                 3
Nov 24 22:30:17 compute-0 ovn_metadata_agent[106771]:     timeout http-request    30s
Nov 24 22:30:17 compute-0 ovn_metadata_agent[106771]:     timeout connect         30s
Nov 24 22:30:17 compute-0 ovn_metadata_agent[106771]:     timeout client          32s
Nov 24 22:30:17 compute-0 ovn_metadata_agent[106771]:     timeout server          32s
Nov 24 22:30:17 compute-0 ovn_metadata_agent[106771]:     timeout http-keep-alive 30s
Nov 24 22:30:17 compute-0 ovn_metadata_agent[106771]: 
Nov 24 22:30:17 compute-0 ovn_metadata_agent[106771]: 
Nov 24 22:30:17 compute-0 ovn_metadata_agent[106771]: listen listener
Nov 24 22:30:17 compute-0 ovn_metadata_agent[106771]:     bind 169.254.169.254:80
Nov 24 22:30:17 compute-0 ovn_metadata_agent[106771]:     server metadata /var/lib/neutron/metadata_proxy
Nov 24 22:30:17 compute-0 ovn_metadata_agent[106771]:     http-request add-header X-OVN-Network-ID a164481b-21c8-4cae-a6e9-b470d8a55a1f
Nov 24 22:30:17 compute-0 ovn_metadata_agent[106771]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 24 22:30:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:17.202 106776 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a164481b-21c8-4cae-a6e9-b470d8a55a1f', 'env', 'PROCESS_TAG=haproxy-a164481b-21c8-4cae-a6e9-b470d8a55a1f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a164481b-21c8-4cae-a6e9-b470d8a55a1f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.210 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.261 189613 INFO nova.compute.manager [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Took 15.32 seconds to spawn the instance on the hypervisor.
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.262 189613 DEBUG nova.compute.manager [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.315 189613 DEBUG oslo_concurrency.lockutils [None req-c1198a63-3903-42fb-ac00-f7c0288b0bc1 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Acquiring lock "57d86171-0790-4408-ae34-dfc07ee52747" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.316 189613 DEBUG oslo_concurrency.lockutils [None req-c1198a63-3903-42fb-ac00-f7c0288b0bc1 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Lock "57d86171-0790-4408-ae34-dfc07ee52747" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.316 189613 DEBUG oslo_concurrency.lockutils [None req-c1198a63-3903-42fb-ac00-f7c0288b0bc1 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Acquiring lock "57d86171-0790-4408-ae34-dfc07ee52747-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.316 189613 DEBUG oslo_concurrency.lockutils [None req-c1198a63-3903-42fb-ac00-f7c0288b0bc1 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Lock "57d86171-0790-4408-ae34-dfc07ee52747-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.316 189613 DEBUG oslo_concurrency.lockutils [None req-c1198a63-3903-42fb-ac00-f7c0288b0bc1 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Lock "57d86171-0790-4408-ae34-dfc07ee52747-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.318 189613 INFO nova.compute.manager [None req-c1198a63-3903-42fb-ac00-f7c0288b0bc1 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Terminating instance
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.319 189613 DEBUG nova.compute.manager [None req-c1198a63-3903-42fb-ac00-f7c0288b0bc1 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 24 22:30:17 compute-0 kernel: tap05ab28a1-e0 (unregistering): left promiscuous mode
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.345 189613 INFO nova.compute.manager [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Took 15.99 seconds to build instance.
Nov 24 22:30:17 compute-0 NetworkManager[56413]: <info>  [1764023417.3561] device (tap05ab28a1-e0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.362 189613 DEBUG oslo_concurrency.lockutils [None req-b69d7521-d466-469d-8788-66bd770ccb95 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Lock "cf45f1e3-b80d-4213-80aa-995f57a9a476" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.081s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.370 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:17 compute-0 ovn_controller[97889]: 2025-11-24T22:30:17Z|00108|binding|INFO|Releasing lport 05ab28a1-e08f-4aa8-83d6-671fd5720283 from this chassis (sb_readonly=0)
Nov 24 22:30:17 compute-0 ovn_controller[97889]: 2025-11-24T22:30:17Z|00109|binding|INFO|Setting lport 05ab28a1-e08f-4aa8-83d6-671fd5720283 down in Southbound
Nov 24 22:30:17 compute-0 ovn_controller[97889]: 2025-11-24T22:30:17Z|00110|binding|INFO|Removing iface tap05ab28a1-e0 ovn-installed in OVS
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.382 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:17 compute-0 ovn_controller[97889]: 2025-11-24T22:30:17Z|00111|binding|INFO|Releasing lport ce3870c0-48db-470b-8d5d-479134c9b554 from this chassis (sb_readonly=0)
Nov 24 22:30:17 compute-0 ovn_controller[97889]: 2025-11-24T22:30:17Z|00112|binding|INFO|Releasing lport 5660e6d6-677f-4bf6-8ebf-40ac9c648155 from this chassis (sb_readonly=0)
Nov 24 22:30:17 compute-0 ovn_controller[97889]: 2025-11-24T22:30:17Z|00113|binding|INFO|Releasing lport 194093c7-0709-4152-be40-3515887108e2 from this chassis (sb_readonly=0)
Nov 24 22:30:17 compute-0 ovn_controller[97889]: 2025-11-24T22:30:17Z|00114|binding|INFO|Releasing lport 7dcd4ddb-3860-49b9-87ed-1daf692defef from this chassis (sb_readonly=0)
Nov 24 22:30:17 compute-0 ovn_controller[97889]: 2025-11-24T22:30:17Z|00115|binding|INFO|Releasing lport 467754a3-548e-4841-9628-d4a6a4daa2bb from this chassis (sb_readonly=0)
Nov 24 22:30:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:17.385 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:6d:17 10.100.0.3'], port_security=['fa:16:3e:ec:6d:17 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '57d86171-0790-4408-ae34-dfc07ee52747', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f12ed9ff-32cf-41a2-a508-d96ae5468fa1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '47b8cf2705154817a1a23039debe2ac1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6b95a23a-48ea-4462-90d4-e5d4f2776eec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.195'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9f70cc3f-b6be-42ce-a39a-7f66ee0c1b99, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], logical_port=05ab28a1-e08f-4aa8-83d6-671fd5720283) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:30:17 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Nov 24 22:30:17 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 9.405s CPU time.
Nov 24 22:30:17 compute-0 systemd-machined[155884]: Machine qemu-8-instance-00000008 terminated.
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.412 189613 DEBUG nova.virt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Emitting event <LifecycleEvent: 1764023417.4091094, a3bee9ba-6618-44bd-a443-da9fff6862a9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.415 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] VM Started (Lifecycle Event)
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.419 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.438 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.441 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.452 189613 DEBUG nova.virt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Emitting event <LifecycleEvent: 1764023417.409202, a3bee9ba-6618-44bd-a443-da9fff6862a9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.452 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] VM Paused (Lifecycle Event)
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.470 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.476 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.507 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.545 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.555 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.594 189613 INFO nova.virt.libvirt.driver [-] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Instance destroyed successfully.
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.594 189613 DEBUG nova.objects.instance [None req-c1198a63-3903-42fb-ac00-f7c0288b0bc1 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Lazy-loading 'resources' on Instance uuid 57d86171-0790-4408-ae34-dfc07ee52747 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.607 189613 DEBUG nova.virt.libvirt.vif [None req-c1198a63-3903-42fb-ac00-f7c0288b0bc1 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T22:29:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-844916333',display_name='tempest-ServersTestJSON-server-844916333',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-844916333',id=8,image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK25sTgX27Ma/LFp7U6VDvFz8O1g4Du+V6L6YZGteUb6y5vGbgt4y46Su5lnL+FrhAwgjJ2IluYL+af3YtR+cgttH5w9PVzWoaxuNx/ODVCLQ6defQSD9k9PpCi90uV0og==',key_name='tempest-keypair-712078251',keypairs=<?>,launch_index=0,launched_at=2025-11-24T22:30:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='47b8cf2705154817a1a23039debe2ac1',ramdisk_id='',reservation_id='r-myjmvl4h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-891938329',owner_user_name='tempest-ServersTestJSON-891938329-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T22:30:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='304529bdd01048709c29df90922b1b2d',uuid=57d86171-0790-4408-ae34-dfc07ee52747,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "05ab28a1-e08f-4aa8-83d6-671fd5720283", "address": "fa:16:3e:ec:6d:17", "network": {"id": "f12ed9ff-32cf-41a2-a508-d96ae5468fa1", "bridge": "br-int", "label": "tempest-ServersTestJSON-2010443651-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "47b8cf2705154817a1a23039debe2ac1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": 
"ovn"}}, "devname": "tap05ab28a1-e0", "ovs_interfaceid": "05ab28a1-e08f-4aa8-83d6-671fd5720283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.608 189613 DEBUG nova.network.os_vif_util [None req-c1198a63-3903-42fb-ac00-f7c0288b0bc1 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Converting VIF {"id": "05ab28a1-e08f-4aa8-83d6-671fd5720283", "address": "fa:16:3e:ec:6d:17", "network": {"id": "f12ed9ff-32cf-41a2-a508-d96ae5468fa1", "bridge": "br-int", "label": "tempest-ServersTestJSON-2010443651-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "47b8cf2705154817a1a23039debe2ac1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05ab28a1-e0", "ovs_interfaceid": "05ab28a1-e08f-4aa8-83d6-671fd5720283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.608 189613 DEBUG nova.network.os_vif_util [None req-c1198a63-3903-42fb-ac00-f7c0288b0bc1 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:6d:17,bridge_name='br-int',has_traffic_filtering=True,id=05ab28a1-e08f-4aa8-83d6-671fd5720283,network=Network(f12ed9ff-32cf-41a2-a508-d96ae5468fa1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05ab28a1-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.608 189613 DEBUG os_vif [None req-c1198a63-3903-42fb-ac00-f7c0288b0bc1 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:6d:17,bridge_name='br-int',has_traffic_filtering=True,id=05ab28a1-e08f-4aa8-83d6-671fd5720283,network=Network(f12ed9ff-32cf-41a2-a508-d96ae5468fa1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05ab28a1-e0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.610 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.610 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05ab28a1-e0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.612 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.614 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.614 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.616 189613 INFO os_vif [None req-c1198a63-3903-42fb-ac00-f7c0288b0bc1 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:6d:17,bridge_name='br-int',has_traffic_filtering=True,id=05ab28a1-e08f-4aa8-83d6-671fd5720283,network=Network(f12ed9ff-32cf-41a2-a508-d96ae5468fa1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05ab28a1-e0')
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.617 189613 INFO nova.virt.libvirt.driver [None req-c1198a63-3903-42fb-ac00-f7c0288b0bc1 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Deleting instance files /var/lib/nova/instances/57d86171-0790-4408-ae34-dfc07ee52747_del
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.618 189613 INFO nova.virt.libvirt.driver [None req-c1198a63-3903-42fb-ac00-f7c0288b0bc1 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Deletion of /var/lib/nova/instances/57d86171-0790-4408-ae34-dfc07ee52747_del complete
Nov 24 22:30:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:17.632 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 24 22:30:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:17.633 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 24 22:30:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:17.633 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:30:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:17.634 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f4c57fbbfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:30:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:17.635 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:30:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:17.635 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:30:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:17.635 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:30:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:17.635 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:30:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:17.635 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:30:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:17.635 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:30:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:17.635 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:30:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:17.635 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:30:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:17.635 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:30:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:17.636 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:30:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:17.636 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:30:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:17.636 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:30:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:17.636 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:30:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:17.636 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:30:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:17.636 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:30:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:17.636 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:30:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:17.636 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:30:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:17.636 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:30:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:17.637 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:30:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:17.637 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:30:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:17.637 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:30:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:17.637 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:30:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:17.637 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:30:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:17.637 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:30:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:17.637 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c559bc590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:30:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:17.642 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance cf45f1e3-b80d-4213-80aa-995f57a9a476 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 24 22:30:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:17.643 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/cf45f1e3-b80d-4213-80aa-995f57a9a476 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}64060fdce0de69c84bc4ec1fbe9dbdfbc4dd57ffec34862b3445d9841d3967a2" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.672 189613 INFO nova.compute.manager [None req-c1198a63-3903-42fb-ac00-f7c0288b0bc1 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Took 0.35 seconds to destroy the instance on the hypervisor.
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.673 189613 DEBUG oslo.service.loopingcall [None req-c1198a63-3903-42fb-ac00-f7c0288b0bc1 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.673 189613 DEBUG nova.compute.manager [-] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 24 22:30:17 compute-0 nova_compute[189608]: 2025-11-24 22:30:17.673 189613 DEBUG nova.network.neutron [-] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 24 22:30:17 compute-0 podman[252631]: 2025-11-24 22:30:17.754790294 +0000 UTC m=+0.077517909 container create 077e21c087588532b333ea30d5f4233709e12de421a1c7ab64f14d98991dcf93 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a164481b-21c8-4cae-a6e9-b470d8a55a1f, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 24 22:30:17 compute-0 systemd[1]: Started libpod-conmon-077e21c087588532b333ea30d5f4233709e12de421a1c7ab64f14d98991dcf93.scope.
Nov 24 22:30:17 compute-0 podman[252631]: 2025-11-24 22:30:17.70830243 +0000 UTC m=+0.031030045 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 22:30:17 compute-0 systemd[1]: Started libcrun container.
Nov 24 22:30:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f70b734fc8588ddef69b4bd6072f4426711e6cbc50b7da65732f497ffbd8ca19/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 24 22:30:17 compute-0 podman[252631]: 2025-11-24 22:30:17.903554426 +0000 UTC m=+0.226282101 container init 077e21c087588532b333ea30d5f4233709e12de421a1c7ab64f14d98991dcf93 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a164481b-21c8-4cae-a6e9-b470d8a55a1f, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 24 22:30:17 compute-0 podman[252631]: 2025-11-24 22:30:17.914389933 +0000 UTC m=+0.237117558 container start 077e21c087588532b333ea30d5f4233709e12de421a1c7ab64f14d98991dcf93 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a164481b-21c8-4cae-a6e9-b470d8a55a1f, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 22:30:17 compute-0 neutron-haproxy-ovnmeta-a164481b-21c8-4cae-a6e9-b470d8a55a1f[252646]: [NOTICE]   (252650) : New worker (252652) forked
Nov 24 22:30:17 compute-0 neutron-haproxy-ovnmeta-a164481b-21c8-4cae-a6e9-b470d8a55a1f[252646]: [NOTICE]   (252650) : Loading success.
Nov 24 22:30:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:18.014 106776 INFO neutron.agent.ovn.metadata.agent [-] Port 05ab28a1-e08f-4aa8-83d6-671fd5720283 in datapath f12ed9ff-32cf-41a2-a508-d96ae5468fa1 unbound from our chassis
Nov 24 22:30:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:18.018 106776 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f12ed9ff-32cf-41a2-a508-d96ae5468fa1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 22:30:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:18.020 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[d0c64c3a-4cfa-451a-bc33-5f1ef2d795ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:18.022 106776 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f12ed9ff-32cf-41a2-a508-d96ae5468fa1 namespace which is not needed anymore
Nov 24 22:30:18 compute-0 neutron-haproxy-ovnmeta-f12ed9ff-32cf-41a2-a508-d96ae5468fa1[252244]: [NOTICE]   (252249) : haproxy version is 2.8.14-c23fe91
Nov 24 22:30:18 compute-0 neutron-haproxy-ovnmeta-f12ed9ff-32cf-41a2-a508-d96ae5468fa1[252244]: [NOTICE]   (252249) : path to executable is /usr/sbin/haproxy
Nov 24 22:30:18 compute-0 neutron-haproxy-ovnmeta-f12ed9ff-32cf-41a2-a508-d96ae5468fa1[252244]: [WARNING]  (252249) : Exiting Master process...
Nov 24 22:30:18 compute-0 neutron-haproxy-ovnmeta-f12ed9ff-32cf-41a2-a508-d96ae5468fa1[252244]: [ALERT]    (252249) : Current worker (252254) exited with code 143 (Terminated)
Nov 24 22:30:18 compute-0 neutron-haproxy-ovnmeta-f12ed9ff-32cf-41a2-a508-d96ae5468fa1[252244]: [WARNING]  (252249) : All workers exited. Exiting... (0)
Nov 24 22:30:18 compute-0 systemd[1]: libpod-2ff2082550131d910b235bfee129f5b3efceba0639347ccb4f3d906c58f7d45d.scope: Deactivated successfully.
Nov 24 22:30:18 compute-0 podman[252678]: 2025-11-24 22:30:18.243122708 +0000 UTC m=+0.076425216 container died 2ff2082550131d910b235bfee129f5b3efceba0639347ccb4f3d906c58f7d45d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f12ed9ff-32cf-41a2-a508-d96ae5468fa1, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 22:30:18 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2ff2082550131d910b235bfee129f5b3efceba0639347ccb4f3d906c58f7d45d-userdata-shm.mount: Deactivated successfully.
Nov 24 22:30:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-3deecc54c960d81e76e6bd51ce45d85c7e71e60b1e6510423000e18bd54df2cf-merged.mount: Deactivated successfully.
Nov 24 22:30:18 compute-0 podman[252678]: 2025-11-24 22:30:18.307692273 +0000 UTC m=+0.140994761 container cleanup 2ff2082550131d910b235bfee129f5b3efceba0639347ccb4f3d906c58f7d45d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f12ed9ff-32cf-41a2-a508-d96ae5468fa1, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 24 22:30:18 compute-0 nova_compute[189608]: 2025-11-24 22:30:18.323 189613 DEBUG nova.network.neutron [req-fb821067-005e-4751-a2df-103e356806f9 req-4f74b549-2a4b-4588-b5c9-470cc70b5b41 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Updated VIF entry in instance network info cache for port 5efccbc3-b2bb-4d9d-ba64-9382a4b2487b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 22:30:18 compute-0 nova_compute[189608]: 2025-11-24 22:30:18.324 189613 DEBUG nova.network.neutron [req-fb821067-005e-4751-a2df-103e356806f9 req-4f74b549-2a4b-4588-b5c9-470cc70b5b41 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Updating instance_info_cache with network_info: [{"id": "5efccbc3-b2bb-4d9d-ba64-9382a4b2487b", "address": "fa:16:3e:40:6c:bb", "network": {"id": "a164481b-21c8-4cae-a6e9-b470d8a55a1f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.88", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6957a775da42c9b535753d6b0279d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5efccbc3-b2", "ovs_interfaceid": "5efccbc3-b2bb-4d9d-ba64-9382a4b2487b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:30:18 compute-0 systemd[1]: libpod-conmon-2ff2082550131d910b235bfee129f5b3efceba0639347ccb4f3d906c58f7d45d.scope: Deactivated successfully.
Nov 24 22:30:18 compute-0 nova_compute[189608]: 2025-11-24 22:30:18.338 189613 DEBUG oslo_concurrency.lockutils [req-fb821067-005e-4751-a2df-103e356806f9 req-4f74b549-2a4b-4588-b5c9-470cc70b5b41 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Releasing lock "refresh_cache-a3bee9ba-6618-44bd-a443-da9fff6862a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:30:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:18.371 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1853 Content-Type: application/json Date: Mon, 24 Nov 2025 22:30:17 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-8b2621cf-ceb4-4d83-9363-84cb1eed49a2 x-openstack-request-id: req-8b2621cf-ceb4-4d83-9363-84cb1eed49a2 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 24 22:30:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:18.371 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "cf45f1e3-b80d-4213-80aa-995f57a9a476", "name": "tempest-TestNetworkBasicOps-server-501882066", "status": "ACTIVE", "tenant_id": "4ac27d3d1c734f4bab455262f79d3106", "user_id": "1599850e48894151b7909b89547cd9e2", "metadata": {}, "hostId": "849202e3a49512045426ce99b0d42958e14c8e1abf51abef0d3150ec", "image": {"id": "ec71d7d5-c197-4331-bf8d-e2de71a8419f", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/ec71d7d5-c197-4331-bf8d-e2de71a8419f"}]}, "flavor": {"id": "a49f1e6c-1051-4dea-812e-0063121444a0", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/a49f1e6c-1051-4dea-812e-0063121444a0"}]}, "created": "2025-11-24T22:30:00Z", "updated": "2025-11-24T22:30:17Z", "addresses": {"tempest-network-smoke--162777084": [{"version": 4, "addr": "10.100.0.11", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:d2:89:1f"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/cf45f1e3-b80d-4213-80aa-995f57a9a476"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/cf45f1e3-b80d-4213-80aa-995f57a9a476"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-TestNetworkBasicOps-1667007681", "OS-SRV-USG:launched_at": "2025-11-24T22:30:17.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-secgroup-smoke-1926462168"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000a", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 24 22:30:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:18.371 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/cf45f1e3-b80d-4213-80aa-995f57a9a476 used request id req-8b2621cf-ceb4-4d83-9363-84cb1eed49a2 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 24 22:30:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:18.373 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'cf45f1e3-b80d-4213-80aa-995f57a9a476', 'name': 'tempest-TestNetworkBasicOps-server-501882066', 'flavor': {'id': 'a49f1e6c-1051-4dea-812e-0063121444a0', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ec71d7d5-c197-4331-bf8d-e2de71a8419f'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000a', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4ac27d3d1c734f4bab455262f79d3106', 'user_id': '1599850e48894151b7909b89547cd9e2', 'hostId': '849202e3a49512045426ce99b0d42958e14c8e1abf51abef0d3150ec', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 24 22:30:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:18.376 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance a3bee9ba-6618-44bd-a443-da9fff6862a9 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 24 22:30:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:18.377 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/a3bee9ba-6618-44bd-a443-da9fff6862a9 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}64060fdce0de69c84bc4ec1fbe9dbdfbc4dd57ffec34862b3445d9841d3967a2" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 24 22:30:18 compute-0 podman[252708]: 2025-11-24 22:30:18.41247143 +0000 UTC m=+0.070036998 container remove 2ff2082550131d910b235bfee129f5b3efceba0639347ccb4f3d906c58f7d45d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f12ed9ff-32cf-41a2-a508-d96ae5468fa1, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 22:30:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:18.421 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[c5220fa3-e762-4c3d-b5b2-2706ba7bfb5d]: (4, ('Mon Nov 24 10:30:18 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f12ed9ff-32cf-41a2-a508-d96ae5468fa1 (2ff2082550131d910b235bfee129f5b3efceba0639347ccb4f3d906c58f7d45d)\n2ff2082550131d910b235bfee129f5b3efceba0639347ccb4f3d906c58f7d45d\nMon Nov 24 10:30:18 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f12ed9ff-32cf-41a2-a508-d96ae5468fa1 (2ff2082550131d910b235bfee129f5b3efceba0639347ccb4f3d906c58f7d45d)\n2ff2082550131d910b235bfee129f5b3efceba0639347ccb4f3d906c58f7d45d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:18.431 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[7d512e3a-4de2-4b46-9aed-7800991dbcd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:18.433 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf12ed9ff-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:30:18 compute-0 nova_compute[189608]: 2025-11-24 22:30:18.437 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:18 compute-0 kernel: tapf12ed9ff-30: left promiscuous mode
Nov 24 22:30:18 compute-0 nova_compute[189608]: 2025-11-24 22:30:18.450 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:18 compute-0 nova_compute[189608]: 2025-11-24 22:30:18.452 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:18.457 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[ba1c2ac7-3f8d-4b39-8479-7b2c3068dd36]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:18.475 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[cfaab872-e6f2-4d0c-905b-dbabe1af3ef0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:18.477 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[92a689ed-6ac3-4236-a95d-d97706ee4642]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:18.524 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[5edebee0-edae-4e5c-97f2-111bcc1e552e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 525411, 'reachable_time': 42669, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252721, 'error': None, 'target': 'ovnmeta-f12ed9ff-32cf-41a2-a508-d96ae5468fa1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:18 compute-0 systemd[1]: run-netns-ovnmeta\x2df12ed9ff\x2d32cf\x2d41a2\x2da508\x2dd96ae5468fa1.mount: Deactivated successfully.
Nov 24 22:30:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:18.531 106891 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f12ed9ff-32cf-41a2-a508-d96ae5468fa1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 24 22:30:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:18.531 106891 DEBUG oslo.privsep.daemon [-] privsep: reply[7eb251a1-23e9-4630-8049-cd7963d50cbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:18 compute-0 nova_compute[189608]: 2025-11-24 22:30:18.920 189613 DEBUG nova.network.neutron [-] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:30:18 compute-0 nova_compute[189608]: 2025-11-24 22:30:18.959 189613 INFO nova.compute.manager [-] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Took 1.29 seconds to deallocate network for instance.
Nov 24 22:30:19 compute-0 nova_compute[189608]: 2025-11-24 22:30:19.012 189613 DEBUG oslo_concurrency.lockutils [None req-c1198a63-3903-42fb-ac00-f7c0288b0bc1 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:30:19 compute-0 nova_compute[189608]: 2025-11-24 22:30:19.013 189613 DEBUG oslo_concurrency.lockutils [None req-c1198a63-3903-42fb-ac00-f7c0288b0bc1 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:30:19 compute-0 nova_compute[189608]: 2025-11-24 22:30:19.032 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:19 compute-0 nova_compute[189608]: 2025-11-24 22:30:19.127 189613 DEBUG nova.network.neutron [req-ae972016-4437-47ee-97c4-4c2418f4af63 req-68b5d36e-7d66-4bc0-80d5-982272e8e1af c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Updated VIF entry in instance network info cache for port 05ab28a1-e08f-4aa8-83d6-671fd5720283. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 22:30:19 compute-0 nova_compute[189608]: 2025-11-24 22:30:19.128 189613 DEBUG nova.network.neutron [req-ae972016-4437-47ee-97c4-4c2418f4af63 req-68b5d36e-7d66-4bc0-80d5-982272e8e1af c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Updating instance_info_cache with network_info: [{"id": "05ab28a1-e08f-4aa8-83d6-671fd5720283", "address": "fa:16:3e:ec:6d:17", "network": {"id": "f12ed9ff-32cf-41a2-a508-d96ae5468fa1", "bridge": "br-int", "label": "tempest-ServersTestJSON-2010443651-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "47b8cf2705154817a1a23039debe2ac1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05ab28a1-e0", "ovs_interfaceid": "05ab28a1-e08f-4aa8-83d6-671fd5720283", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:30:19 compute-0 nova_compute[189608]: 2025-11-24 22:30:19.147 189613 DEBUG oslo_concurrency.lockutils [req-ae972016-4437-47ee-97c4-4c2418f4af63 req-68b5d36e-7d66-4bc0-80d5-982272e8e1af c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Releasing lock "refresh_cache-57d86171-0790-4408-ae34-dfc07ee52747" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:30:19 compute-0 nova_compute[189608]: 2025-11-24 22:30:19.177 189613 DEBUG nova.compute.provider_tree [None req-c1198a63-3903-42fb-ac00-f7c0288b0bc1 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:30:19 compute-0 nova_compute[189608]: 2025-11-24 22:30:19.198 189613 DEBUG nova.scheduler.client.report [None req-c1198a63-3903-42fb-ac00-f7c0288b0bc1 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:30:19 compute-0 nova_compute[189608]: 2025-11-24 22:30:19.225 189613 DEBUG oslo_concurrency.lockutils [None req-c1198a63-3903-42fb-ac00-f7c0288b0bc1 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.213s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:30:19 compute-0 nova_compute[189608]: 2025-11-24 22:30:19.257 189613 INFO nova.scheduler.client.report [None req-c1198a63-3903-42fb-ac00-f7c0288b0bc1 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Deleted allocations for instance 57d86171-0790-4408-ae34-dfc07ee52747
Nov 24 22:30:19 compute-0 nova_compute[189608]: 2025-11-24 22:30:19.331 189613 DEBUG oslo_concurrency.lockutils [None req-c1198a63-3903-42fb-ac00-f7c0288b0bc1 304529bdd01048709c29df90922b1b2d 47b8cf2705154817a1a23039debe2ac1 - - default default] Lock "57d86171-0790-4408-ae34-dfc07ee52747" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.016s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:30:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:19.515 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1691 Content-Type: application/json Date: Mon, 24 Nov 2025 22:30:18 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-7bdef916-7380-4f64-9df1-70ac8e7439d0 x-openstack-request-id: req-7bdef916-7380-4f64-9df1-70ac8e7439d0 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 24 22:30:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:19.515 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "a3bee9ba-6618-44bd-a443-da9fff6862a9", "name": "te-0491564-asg-haouulsup5dm-5bjba57zh6x7-pvowny27behx", "status": "BUILD", "tenant_id": "4a6957a775da42c9b535753d6b0279d6", "user_id": "fcf527fb124b42b9ab6a20cc0938b39f", "metadata": {"metering.server_group": "c6477657-e9b0-476c-83b3-9dc474e946c6"}, "hostId": "81545f88e9d372ef5a81fb4c58a4232a2d7e6782c7ccb5b924a3030b", "image": {"id": "ea88776c-3c0b-4e74-99b4-08aadc81390f", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/ea88776c-3c0b-4e74-99b4-08aadc81390f"}]}, "flavor": {"id": "a49f1e6c-1051-4dea-812e-0063121444a0", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/a49f1e6c-1051-4dea-812e-0063121444a0"}]}, "created": "2025-11-24T22:30:03Z", "updated": "2025-11-24T22:30:06Z", "addresses": {}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/a3bee9ba-6618-44bd-a443-da9fff6862a9"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/a3bee9ba-6618-44bd-a443-da9fff6862a9"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "key_name": null, "OS-SRV-USG:launched_at": null, "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000b", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": "spawning", "OS-EXT-STS:vm_state": "building", "OS-EXT-STS:power_state": 0, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 24 22:30:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:19.515 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/a3bee9ba-6618-44bd-a443-da9fff6862a9 used request id req-7bdef916-7380-4f64-9df1-70ac8e7439d0 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 24 22:30:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:19.518 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a3bee9ba-6618-44bd-a443-da9fff6862a9', 'name': 'te-0491564-asg-haouulsup5dm-5bjba57zh6x7-pvowny27behx', 'flavor': {'id': 'a49f1e6c-1051-4dea-812e-0063121444a0', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ea88776c-3c0b-4e74-99b4-08aadc81390f'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'paused', 'tenant_id': '4a6957a775da42c9b535753d6b0279d6', 'user_id': 'fcf527fb124b42b9ab6a20cc0938b39f', 'hostId': '81545f88e9d372ef5a81fb4c58a4232a2d7e6782c7ccb5b924a3030b', 'status': 'paused', 'metadata': {'metering.server_group': 'c6477657-e9b0-476c-83b3-9dc474e946c6'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 24 22:30:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:19.520 14 ERROR ceilometer.compute.virt.libvirt.utils [-] Fail to get domain uuid 57d86171-0790-4408-ae34-dfc07ee52747 metadata, libvirtError: Domain not found: no domain with matching uuid '57d86171-0790-4408-ae34-dfc07ee52747' (instance-00000008)
Nov 24 22:30:19 compute-0 ceilometer_agent_compute[200333]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid '57d86171-0790-4408-ae34-dfc07ee52747' (instance-00000008)
Nov 24 22:30:19 compute-0 ceilometer_agent_compute[200333]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid '57d86171-0790-4408-ae34-dfc07ee52747' (instance-00000008)
Nov 24 22:30:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:19.523 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 24 22:30:19 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:19.524 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/8b851edf-b3aa-4ca0-a142-8dd0d0e6270a -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}64060fdce0de69c84bc4ec1fbe9dbdfbc4dd57ffec34862b3445d9841d3967a2" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 24 22:30:19 compute-0 nova_compute[189608]: 2025-11-24 22:30:19.596 189613 DEBUG nova.compute.manager [req-b1ca434d-ba81-438a-97f7-6f95804971dd req-e25f7b80-13c8-4996-b90d-5558ebdf23fe c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Received event network-vif-plugged-8f00051e-bd87-48eb-aba6-5dbf3d527aef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:30:19 compute-0 nova_compute[189608]: 2025-11-24 22:30:19.596 189613 DEBUG oslo_concurrency.lockutils [req-b1ca434d-ba81-438a-97f7-6f95804971dd req-e25f7b80-13c8-4996-b90d-5558ebdf23fe c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "cf45f1e3-b80d-4213-80aa-995f57a9a476-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:30:19 compute-0 nova_compute[189608]: 2025-11-24 22:30:19.596 189613 DEBUG oslo_concurrency.lockutils [req-b1ca434d-ba81-438a-97f7-6f95804971dd req-e25f7b80-13c8-4996-b90d-5558ebdf23fe c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "cf45f1e3-b80d-4213-80aa-995f57a9a476-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:30:19 compute-0 nova_compute[189608]: 2025-11-24 22:30:19.596 189613 DEBUG oslo_concurrency.lockutils [req-b1ca434d-ba81-438a-97f7-6f95804971dd req-e25f7b80-13c8-4996-b90d-5558ebdf23fe c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "cf45f1e3-b80d-4213-80aa-995f57a9a476-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:30:19 compute-0 nova_compute[189608]: 2025-11-24 22:30:19.597 189613 DEBUG nova.compute.manager [req-b1ca434d-ba81-438a-97f7-6f95804971dd req-e25f7b80-13c8-4996-b90d-5558ebdf23fe c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] No waiting events found dispatching network-vif-plugged-8f00051e-bd87-48eb-aba6-5dbf3d527aef pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 22:30:19 compute-0 nova_compute[189608]: 2025-11-24 22:30:19.597 189613 WARNING nova.compute.manager [req-b1ca434d-ba81-438a-97f7-6f95804971dd req-e25f7b80-13c8-4996-b90d-5558ebdf23fe c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Received unexpected event network-vif-plugged-8f00051e-bd87-48eb-aba6-5dbf3d527aef for instance with vm_state active and task_state None.
Nov 24 22:30:19 compute-0 nova_compute[189608]: 2025-11-24 22:30:19.597 189613 DEBUG nova.compute.manager [req-b1ca434d-ba81-438a-97f7-6f95804971dd req-e25f7b80-13c8-4996-b90d-5558ebdf23fe c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Received event network-vif-deleted-05ab28a1-e08f-4aa8-83d6-671fd5720283 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:30:19 compute-0 nova_compute[189608]: 2025-11-24 22:30:19.597 189613 INFO nova.compute.manager [req-b1ca434d-ba81-438a-97f7-6f95804971dd req-e25f7b80-13c8-4996-b90d-5558ebdf23fe c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Neutron deleted interface 05ab28a1-e08f-4aa8-83d6-671fd5720283; detaching it from the instance and deleting it from the info cache
Nov 24 22:30:19 compute-0 nova_compute[189608]: 2025-11-24 22:30:19.597 189613 DEBUG nova.network.neutron [req-b1ca434d-ba81-438a-97f7-6f95804971dd req-e25f7b80-13c8-4996-b90d-5558ebdf23fe c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Nov 24 22:30:19 compute-0 nova_compute[189608]: 2025-11-24 22:30:19.603 189613 DEBUG nova.compute.manager [req-b1ca434d-ba81-438a-97f7-6f95804971dd req-e25f7b80-13c8-4996-b90d-5558ebdf23fe c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Detach interface failed, port_id=05ab28a1-e08f-4aa8-83d6-671fd5720283, reason: Instance 57d86171-0790-4408-ae34-dfc07ee52747 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 24 22:30:19 compute-0 nova_compute[189608]: 2025-11-24 22:30:19.664 189613 DEBUG nova.compute.manager [req-9d9b873d-92b7-49c2-97c3-d23ae5bf415d req-581dcd8d-0d2f-47fa-b930-89121cc96403 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Received event network-changed-fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:30:19 compute-0 nova_compute[189608]: 2025-11-24 22:30:19.665 189613 DEBUG nova.compute.manager [req-9d9b873d-92b7-49c2-97c3-d23ae5bf415d req-581dcd8d-0d2f-47fa-b930-89121cc96403 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Refreshing instance network info cache due to event network-changed-fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 22:30:19 compute-0 nova_compute[189608]: 2025-11-24 22:30:19.665 189613 DEBUG oslo_concurrency.lockutils [req-9d9b873d-92b7-49c2-97c3-d23ae5bf415d req-581dcd8d-0d2f-47fa-b930-89121cc96403 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "refresh_cache-f238e71a-660e-497c-8472-193245387bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:30:19 compute-0 nova_compute[189608]: 2025-11-24 22:30:19.665 189613 DEBUG oslo_concurrency.lockutils [req-9d9b873d-92b7-49c2-97c3-d23ae5bf415d req-581dcd8d-0d2f-47fa-b930-89121cc96403 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquired lock "refresh_cache-f238e71a-660e-497c-8472-193245387bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:30:19 compute-0 nova_compute[189608]: 2025-11-24 22:30:19.665 189613 DEBUG nova.network.neutron [req-9d9b873d-92b7-49c2-97c3-d23ae5bf415d req-581dcd8d-0d2f-47fa-b930-89121cc96403 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Refreshing network info cache for port fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.339 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1996 Content-Type: application/json Date: Mon, 24 Nov 2025 22:30:19 GMT Keep-Alive: timeout=5, max=98 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-daf91037-aebf-4a45-9a08-78e44e8567b9 x-openstack-request-id: req-daf91037-aebf-4a45-9a08-78e44e8567b9 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.339 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "8b851edf-b3aa-4ca0-a142-8dd0d0e6270a", "name": "tempest-AttachInterfacesUnderV243Test-server-476448184", "status": "ACTIVE", "tenant_id": "5225a786b1b64fcbbd2af0a1b5082c92", "user_id": "d2ee7a1723f8477f92f62974f0676bd8", "metadata": {}, "hostId": "3b9950230fdcb379aa9072d7805d07d5ef9094ffbbf5531e9809f6b0", "image": {"id": "ec71d7d5-c197-4331-bf8d-e2de71a8419f", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/ec71d7d5-c197-4331-bf8d-e2de71a8419f"}]}, "flavor": {"id": "a49f1e6c-1051-4dea-812e-0063121444a0", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/a49f1e6c-1051-4dea-812e-0063121444a0"}]}, "created": "2025-11-24T22:29:10Z", "updated": "2025-11-24T22:29:22Z", "addresses": {"tempest-AttachInterfacesUnderV243Test-1769316562-network": [{"version": 4, "addr": "10.100.0.13", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:84:e8:3d"}, {"version": 4, "addr": "192.168.122.184", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:84:e8:3d"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/8b851edf-b3aa-4ca0-a142-8dd0d0e6270a"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/8b851edf-b3aa-4ca0-a142-8dd0d0e6270a"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-1779492647", "OS-SRV-USG:launched_at": "2025-11-24T22:29:22.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--828281728"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000006", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.339 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/8b851edf-b3aa-4ca0-a142-8dd0d0e6270a used request id req-daf91037-aebf-4a45-9a08-78e44e8567b9 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.340 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '8b851edf-b3aa-4ca0-a142-8dd0d0e6270a', 'name': 'tempest-AttachInterfacesUnderV243Test-server-476448184', 'flavor': {'id': 'a49f1e6c-1051-4dea-812e-0063121444a0', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ec71d7d5-c197-4331-bf8d-e2de71a8419f'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000006', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '5225a786b1b64fcbbd2af0a1b5082c92', 'user_id': 'd2ee7a1723f8477f92f62974f0676bd8', 'hostId': '3b9950230fdcb379aa9072d7805d07d5ef9094ffbbf5531e9809f6b0', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.342 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance f238e71a-660e-497c-8472-193245387bcf from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.343 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/f238e71a-660e-497c-8472-193245387bcf -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}64060fdce0de69c84bc4ec1fbe9dbdfbc4dd57ffec34862b3445d9841d3967a2" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 24 22:30:20 compute-0 podman[252727]: 2025-11-24 22:30:20.55944604 +0000 UTC m=+0.114051625 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 22:30:20 compute-0 podman[252725]: 2025-11-24 22:30:20.571517915 +0000 UTC m=+0.126253224 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, managed_by=edpm_ansible, name=ubi9, vcs-type=git, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, version=9.4, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm)
Nov 24 22:30:20 compute-0 podman[252726]: 2025-11-24 22:30:20.580930727 +0000 UTC m=+0.129816314 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, container_name=openstack_network_exporter, name=ubi9-minimal, version=9.6, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, release=1755695350, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, managed_by=edpm_ansible)
Nov 24 22:30:20 compute-0 sshd-session[252722]: Invalid user api from 185.217.1.246 port 63756
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.864 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1856 Content-Type: application/json Date: Mon, 24 Nov 2025 22:30:20 GMT Keep-Alive: timeout=5, max=97 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-fb42b4d5-098c-4bde-891f-176df8e26c8f x-openstack-request-id: req-fb42b4d5-098c-4bde-891f-176df8e26c8f _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.864 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "f238e71a-660e-497c-8472-193245387bcf", "name": "tempest-ServerActionsTestJSON-server-1585588029", "status": "ACTIVE", "tenant_id": "97e21ffeec1c4428ba3d70499fc3281f", "user_id": "11288fa7771048b4a8faf1d6485ab059", "metadata": {}, "hostId": "27625e982f38e3650ffe5ce8e3be255c7a5bc7b5228df6055671ee8e", "image": {"id": "ec71d7d5-c197-4331-bf8d-e2de71a8419f", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/ec71d7d5-c197-4331-bf8d-e2de71a8419f"}]}, "flavor": {"id": "a49f1e6c-1051-4dea-812e-0063121444a0", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/a49f1e6c-1051-4dea-812e-0063121444a0"}]}, "created": "2025-11-24T22:29:57Z", "updated": "2025-11-24T22:30:11Z", "addresses": {"tempest-ServerActionsTestJSON-98517945-network": [{"version": 4, "addr": "10.100.0.12", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:40:76:1e"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/f238e71a-660e-497c-8472-193245387bcf"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/f238e71a-660e-497c-8472-193245387bcf"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-731506490", "OS-SRV-USG:launched_at": "2025-11-24T22:30:11.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--374312114"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000009", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.864 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/f238e71a-660e-497c-8472-193245387bcf used request id req-fb42b4d5-098c-4bde-891f-176df8e26c8f request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.866 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f238e71a-660e-497c-8472-193245387bcf', 'name': 'tempest-ServerActionsTestJSON-server-1585588029', 'flavor': {'id': 'a49f1e6c-1051-4dea-812e-0063121444a0', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ec71d7d5-c197-4331-bf8d-e2de71a8419f'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000009', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '97e21ffeec1c4428ba3d70499fc3281f', 'user_id': '11288fa7771048b4a8faf1d6485ab059', 'hostId': '27625e982f38e3650ffe5ce8e3be255c7a5bc7b5228df6055671ee8e', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.866 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.866 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.866 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.867 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.868 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-24T22:30:20.867157) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.873 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for cf45f1e3-b80d-4213-80aa-995f57a9a476 / tap8f00051e-bd inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.873 14 DEBUG ceilometer.compute.pollsters [-] cf45f1e3-b80d-4213-80aa-995f57a9a476/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.878 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for a3bee9ba-6618-44bd-a443-da9fff6862a9 / tap5efccbc3-b2 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.878 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.882 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a / tape0390902-6a inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.883 14 DEBUG ceilometer.compute.pollsters [-] 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.888 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for f238e71a-660e-497c-8472-193245387bcf / tapfdd48bd9-f9 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.888 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.889 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.889 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f4c580340b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.890 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.890 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.890 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.890 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.891 14 DEBUG ceilometer.compute.pollsters [-] cf45f1e3-b80d-4213-80aa-995f57a9a476/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.891 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.892 14 DEBUG ceilometer.compute.pollsters [-] 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.891 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-24T22:30:20.890849) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.892 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.893 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.893 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f4c57fba4e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.894 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.894 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.894 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.894 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.895 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-24T22:30:20.894818) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.913 14 DEBUG ceilometer.compute.pollsters [-] cf45f1e3-b80d-4213-80aa-995f57a9a476/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.914 14 DEBUG ceilometer.compute.pollsters [-] cf45f1e3-b80d-4213-80aa-995f57a9a476/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.933 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.934 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.960 14 DEBUG ceilometer.compute.pollsters [-] 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.961 14 DEBUG ceilometer.compute.pollsters [-] 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.987 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.989 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.990 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f4c57fbb080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.991 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.992 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.992 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.993 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:30:20 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:20.993 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-24T22:30:20.993012) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.064 14 DEBUG ceilometer.compute.pollsters [-] cf45f1e3-b80d-4213-80aa-995f57a9a476/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.065 14 DEBUG ceilometer.compute.pollsters [-] cf45f1e3-b80d-4213-80aa-995f57a9a476/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.134 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.135 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.197 14 DEBUG ceilometer.compute.pollsters [-] 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/disk.device.read.bytes volume: 31001088 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.197 14 DEBUG ceilometer.compute.pollsters [-] 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.261 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.261 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.262 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.263 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f4c57fbb1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.263 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.263 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.263 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.263 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.264 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-24T22:30:21.263820) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.264 14 DEBUG ceilometer.compute.pollsters [-] cf45f1e3-b80d-4213-80aa-995f57a9a476/disk.device.read.latency volume: 748983982 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.265 14 DEBUG ceilometer.compute.pollsters [-] cf45f1e3-b80d-4213-80aa-995f57a9a476/disk.device.read.latency volume: 3424366 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.266 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.267 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.267 14 DEBUG ceilometer.compute.pollsters [-] 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/disk.device.read.latency volume: 1146156701 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.268 14 DEBUG ceilometer.compute.pollsters [-] 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/disk.device.read.latency volume: 119848273 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.269 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/disk.device.read.latency volume: 795423047 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.269 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/disk.device.read.latency volume: 3095386 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.270 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.271 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f4c57fb9730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.271 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.271 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.272 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.272 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.273 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-24T22:30:21.272699) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.307 14 DEBUG ceilometer.compute.pollsters [-] cf45f1e3-b80d-4213-80aa-995f57a9a476/cpu volume: 3930000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.343 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/cpu volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.378 14 DEBUG ceilometer.compute.pollsters [-] 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/cpu volume: 34920000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.437 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/cpu volume: 9680000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.439 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.439 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f4c57fbb200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.440 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.440 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.440 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.441 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.441 14 DEBUG ceilometer.compute.pollsters [-] cf45f1e3-b80d-4213-80aa-995f57a9a476/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.441 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-24T22:30:21.441244) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.442 14 DEBUG ceilometer.compute.pollsters [-] cf45f1e3-b80d-4213-80aa-995f57a9a476/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.443 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.443 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.444 14 DEBUG ceilometer.compute.pollsters [-] 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/disk.device.read.requests volume: 1131 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.444 14 DEBUG ceilometer.compute.pollsters [-] 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.445 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.445 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.446 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.447 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f4c57fbb260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.447 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.448 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.448 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.449 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.449 14 DEBUG ceilometer.compute.pollsters [-] cf45f1e3-b80d-4213-80aa-995f57a9a476/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.449 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-24T22:30:21.448979) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.450 14 DEBUG ceilometer.compute.pollsters [-] cf45f1e3-b80d-4213-80aa-995f57a9a476/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.450 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.451 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.452 14 DEBUG ceilometer.compute.pollsters [-] 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.452 14 DEBUG ceilometer.compute.pollsters [-] 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.453 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.454 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.455 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.455 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f4c57fbb2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.455 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.456 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:30:21 compute-0 nova_compute[189608]: 2025-11-24 22:30:21.455 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.456 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.457 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.457 14 DEBUG ceilometer.compute.pollsters [-] cf45f1e3-b80d-4213-80aa-995f57a9a476/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.457 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-24T22:30:21.457039) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.458 14 DEBUG ceilometer.compute.pollsters [-] cf45f1e3-b80d-4213-80aa-995f57a9a476/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.458 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.459 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.460 14 DEBUG ceilometer.compute.pollsters [-] 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/disk.device.write.bytes volume: 72937472 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.460 14 DEBUG ceilometer.compute.pollsters [-] 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.461 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.462 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.464 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.464 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f4c57fbb320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.465 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.465 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.465 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.466 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.466 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-24T22:30:21.466504) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.466 14 DEBUG ceilometer.compute.pollsters [-] cf45f1e3-b80d-4213-80aa-995f57a9a476/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.467 14 DEBUG ceilometer.compute.pollsters [-] cf45f1e3-b80d-4213-80aa-995f57a9a476/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.468 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.469 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.469 14 DEBUG ceilometer.compute.pollsters [-] 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/disk.device.write.latency volume: 4861263528 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.470 14 DEBUG ceilometer.compute.pollsters [-] 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.470 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.471 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.472 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.473 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f4c57fbb380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.473 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.474 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.474 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.475 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.475 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-24T22:30:21.475046) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.476 14 DEBUG ceilometer.compute.pollsters [-] cf45f1e3-b80d-4213-80aa-995f57a9a476/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.476 14 DEBUG ceilometer.compute.pollsters [-] cf45f1e3-b80d-4213-80aa-995f57a9a476/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.477 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.477 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.478 14 DEBUG ceilometer.compute.pollsters [-] 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/disk.device.write.requests volume: 309 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.479 14 DEBUG ceilometer.compute.pollsters [-] 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.479 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.480 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.481 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.482 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f4c58034380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.482 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.482 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.483 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.483 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.483 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-24T22:30:21.483595) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.484 14 DEBUG ceilometer.compute.pollsters [-] cf45f1e3-b80d-4213-80aa-995f57a9a476/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.484 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/power.state volume: 3 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.485 14 DEBUG ceilometer.compute.pollsters [-] 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.486 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.486 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.487 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f4c57fbb740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.487 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.488 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.488 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.488 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.489 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-24T22:30:21.488823) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.489 14 DEBUG ceilometer.compute.pollsters [-] cf45f1e3-b80d-4213-80aa-995f57a9a476/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.490 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.490 14 DEBUG ceilometer.compute.pollsters [-] 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.491 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.491 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.492 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f4c57fbb3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.492 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.492 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.492 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.493 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.493 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-24T22:30:21.492982) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.494 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.494 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f4c57fbb9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.494 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.494 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.495 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.495 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.495 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.495 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-24T22:30:21.495242) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.495 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-TestNetworkBasicOps-server-501882066>, <NovaLikeServer: te-0491564-asg-haouulsup5dm-5bjba57zh6x7-pvowny27behx>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-476448184>, <NovaLikeServer: tempest-ServerActionsTestJSON-server-1585588029>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestNetworkBasicOps-server-501882066>, <NovaLikeServer: te-0491564-asg-haouulsup5dm-5bjba57zh6x7-pvowny27behx>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-476448184>, <NovaLikeServer: tempest-ServerActionsTestJSON-server-1585588029>]
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.496 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f4c57fbb440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.496 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.496 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.497 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.497 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.497 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-24T22:30:21.497172) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.498 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.498 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f4c57fbbc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.499 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.499 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.499 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.499 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.499 14 DEBUG ceilometer.compute.pollsters [-] cf45f1e3-b80d-4213-80aa-995f57a9a476/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.499 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-24T22:30:21.499650) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.500 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.500 14 DEBUG ceilometer.compute.pollsters [-] 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.501 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.501 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.501 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f4c57fbbcb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.502 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.502 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.502 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.502 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.503 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-24T22:30:21.502770) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.503 14 DEBUG ceilometer.compute.pollsters [-] cf45f1e3-b80d-4213-80aa-995f57a9a476/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.503 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.504 14 DEBUG ceilometer.compute.pollsters [-] 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.504 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.505 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.505 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f4c57fbbd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.505 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.505 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.506 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.506 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.506 14 DEBUG ceilometer.compute.pollsters [-] cf45f1e3-b80d-4213-80aa-995f57a9a476/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.506 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-24T22:30:21.506395) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.507 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.507 14 DEBUG ceilometer.compute.pollsters [-] 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.508 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.508 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.509 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f4c57fbbda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.509 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.509 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.509 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.510 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.510 14 DEBUG ceilometer.compute.pollsters [-] cf45f1e3-b80d-4213-80aa-995f57a9a476/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.510 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-24T22:30:21.510100) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.511 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.511 14 DEBUG ceilometer.compute.pollsters [-] 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.511 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.512 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.512 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f4c57fbbe30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.512 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.513 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.513 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.513 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.514 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-24T22:30:21.513851) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.514 14 DEBUG ceilometer.compute.pollsters [-] cf45f1e3-b80d-4213-80aa-995f57a9a476/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.514 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.515 14 DEBUG ceilometer.compute.pollsters [-] 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.515 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.516 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.516 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f4c57fbb680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.516 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.516 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.517 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.517 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.517 14 DEBUG ceilometer.compute.pollsters [-] cf45f1e3-b80d-4213-80aa-995f57a9a476/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.518 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-24T22:30:21.517411) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.518 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic in not available for instance cf45f1e3-b80d-4213-80aa-995f57a9a476: ceilometer.compute.pollsters.NoVolumeException
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.519 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.519 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic in not available for instance a3bee9ba-6618-44bd-a443-da9fff6862a9: ceilometer.compute.pollsters.NoVolumeException
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.519 14 DEBUG ceilometer.compute.pollsters [-] 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/memory.usage volume: 42.69140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.520 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.520 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic in not available for instance f238e71a-660e-497c-8472-193245387bcf: ceilometer.compute.pollsters.NoVolumeException
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.520 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.520 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f4c57fbbec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.521 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.521 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.521 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.521 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.522 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.522 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-24T22:30:21.521750) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.522 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-TestNetworkBasicOps-server-501882066>, <NovaLikeServer: te-0491564-asg-haouulsup5dm-5bjba57zh6x7-pvowny27behx>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-476448184>, <NovaLikeServer: tempest-ServerActionsTestJSON-server-1585588029>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestNetworkBasicOps-server-501882066>, <NovaLikeServer: te-0491564-asg-haouulsup5dm-5bjba57zh6x7-pvowny27behx>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-476448184>, <NovaLikeServer: tempest-ServerActionsTestJSON-server-1585588029>]
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.522 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f4c57fbb6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.523 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.523 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.523 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.523 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.524 14 DEBUG ceilometer.compute.pollsters [-] cf45f1e3-b80d-4213-80aa-995f57a9a476/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.524 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-24T22:30:21.523766) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.524 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.524 14 DEBUG ceilometer.compute.pollsters [-] 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/network.incoming.bytes volume: 1796 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.525 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.525 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.525 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f4c59b99130>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.526 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.526 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.526 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.527 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.527 14 DEBUG ceilometer.compute.pollsters [-] cf45f1e3-b80d-4213-80aa-995f57a9a476/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.527 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-24T22:30:21.527072) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.527 14 DEBUG ceilometer.compute.pollsters [-] cf45f1e3-b80d-4213-80aa-995f57a9a476/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.528 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.528 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.529 14 DEBUG ceilometer.compute.pollsters [-] 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/disk.device.allocation volume: 30679040 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.529 14 DEBUG ceilometer.compute.pollsters [-] 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.529 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.530 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.531 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.531 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f4c57fbbf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.531 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.531 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.531 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.531 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.531 14 DEBUG ceilometer.compute.pollsters [-] cf45f1e3-b80d-4213-80aa-995f57a9a476/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.532 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.532 14 DEBUG ceilometer.compute.pollsters [-] 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.532 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.533 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.534 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-24T22:30:21.531785) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.536 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.536 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.536 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.536 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.536 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.536 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.537 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.537 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.537 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.537 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.537 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.537 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.537 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.538 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.538 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.538 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.538 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.538 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:30:21 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:30:21.538 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
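[editor's note] The ceilometer lines above (discovery, coordination check, heartbeat, per-instance samples, completion) all come from one pass over a single pollster. A minimal sketch of that control flow follows; Pollster, discover_local_instances and publish are hypothetical stand-ins for the real stevedore extensions and publishers, so only the sequence mirrors the log.

    # Minimal sketch of the per-pollster sequence visible in the ceilometer lines
    # above (discovery -> polling -> heartbeat -> samples -> finished).
    # Pollster, discover_local_instances and publish are hypothetical stand-ins,
    # not ceilometer's real classes.
    import datetime
    import logging

    LOG = logging.getLogger("polling-sketch")

    class Pollster:
        def __init__(self, name):
            self.name = name

        def get_samples(self, resources):
            # Real pollsters read libvirt/OVS counters; this stub reports zero.
            return [{"resource": r, "meter": self.name, "volume": 0} for r in resources]

    def run_task(pollsters, discover_local_instances, publish):
        for pollster in pollsters:
            resources = discover_local_instances()           # "Executing discovery process ..."
            LOG.info("Polling pollster %s", pollster.name)   # "Polling pollster ..."
            heartbeat = datetime.datetime.utcnow()           # "Pollster heartbeat update: ..."
            for sample in pollster.get_samples(resources):
                publish(sample)                              # "<uuid>/<meter> volume: N"
            LOG.info("Finished polling pollster %s", pollster.name)

    run_task([Pollster("network.outgoing.packets")],
             lambda: ["cf45f1e3", "a3bee9ba"], print)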
Nov 24 22:30:22 compute-0 sshd-session[252722]: Disconnecting invalid user api 185.217.1.246 port 63756: Change of username or service not allowed: (api,ssh-connection) -> (manish,ssh-connection) [preauth]
Nov 24 22:30:22 compute-0 nova_compute[189608]: 2025-11-24 22:30:22.613 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:22 compute-0 nova_compute[189608]: 2025-11-24 22:30:22.793 189613 DEBUG nova.compute.manager [req-f9fcf225-c7c7-49ae-a137-7ed7e674dc47 req-ecb7ac3f-613d-4c99-8f97-16c0b6df4348 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Received event network-changed-8f00051e-bd87-48eb-aba6-5dbf3d527aef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:30:22 compute-0 nova_compute[189608]: 2025-11-24 22:30:22.794 189613 DEBUG nova.compute.manager [req-f9fcf225-c7c7-49ae-a137-7ed7e674dc47 req-ecb7ac3f-613d-4c99-8f97-16c0b6df4348 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Refreshing instance network info cache due to event network-changed-8f00051e-bd87-48eb-aba6-5dbf3d527aef. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 22:30:22 compute-0 nova_compute[189608]: 2025-11-24 22:30:22.794 189613 DEBUG oslo_concurrency.lockutils [req-f9fcf225-c7c7-49ae-a137-7ed7e674dc47 req-ecb7ac3f-613d-4c99-8f97-16c0b6df4348 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "refresh_cache-cf45f1e3-b80d-4213-80aa-995f57a9a476" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:30:22 compute-0 nova_compute[189608]: 2025-11-24 22:30:22.795 189613 DEBUG oslo_concurrency.lockutils [req-f9fcf225-c7c7-49ae-a137-7ed7e674dc47 req-ecb7ac3f-613d-4c99-8f97-16c0b6df4348 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquired lock "refresh_cache-cf45f1e3-b80d-4213-80aa-995f57a9a476" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:30:22 compute-0 nova_compute[189608]: 2025-11-24 22:30:22.796 189613 DEBUG nova.network.neutron [req-f9fcf225-c7c7-49ae-a137-7ed7e674dc47 req-ecb7ac3f-613d-4c99-8f97-16c0b6df4348 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Refreshing network info cache for port 8f00051e-bd87-48eb-aba6-5dbf3d527aef _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 22:30:23 compute-0 nova_compute[189608]: 2025-11-24 22:30:23.272 189613 DEBUG nova.network.neutron [req-9d9b873d-92b7-49c2-97c3-d23ae5bf415d req-581dcd8d-0d2f-47fa-b930-89121cc96403 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Updated VIF entry in instance network info cache for port fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 22:30:23 compute-0 nova_compute[189608]: 2025-11-24 22:30:23.273 189613 DEBUG nova.network.neutron [req-9d9b873d-92b7-49c2-97c3-d23ae5bf415d req-581dcd8d-0d2f-47fa-b930-89121cc96403 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Updating instance_info_cache with network_info: [{"id": "fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13", "address": "fa:16:3e:40:76:1e", "network": {"id": "29585b3c-5eec-4652-ae2f-4aa9ec19d924", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-98517945-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "97e21ffeec1c4428ba3d70499fc3281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd48bd9-f9", "ovs_interfaceid": "fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:30:23 compute-0 nova_compute[189608]: 2025-11-24 22:30:23.300 189613 DEBUG oslo_concurrency.lockutils [req-9d9b873d-92b7-49c2-97c3-d23ae5bf415d req-581dcd8d-0d2f-47fa-b930-89121cc96403 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Releasing lock "refresh_cache-f238e71a-660e-497c-8472-193245387bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
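[editor's note] The instance_info_cache payload in the update above is ordinary JSON: a list of VIFs, each carrying a port id, MAC address, and nested subnet/IP data including floating IPs. A short sketch, using a trimmed copy of that payload, shows how the addresses logged here can be pulled out:

    import json

    # Trimmed copy of the network_info structure from the log line above;
    # only the fields used below are kept.
    network_info_json = '''
    [{"id": "fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13",
      "address": "fa:16:3e:40:76:1e",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
        "ips": [{"address": "10.100.0.12",
                 "floating_ips": [{"address": "192.168.122.192"}]}]}]}}]
    '''

    for vif in json.loads(network_info_json):
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floating = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], vif["address"], ip["address"], floating)
    # -> fdd48bd9-... fa:16:3e:40:76:1e 10.100.0.12 ['192.168.122.192']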
Nov 24 22:30:23 compute-0 nova_compute[189608]: 2025-11-24 22:30:23.668 189613 DEBUG nova.compute.manager [req-f01b0aa3-0ed4-4556-8030-8d8d1a002af1 req-c1f593f7-eebe-4166-890b-6f92e8e8dfc9 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Received event network-vif-plugged-5efccbc3-b2bb-4d9d-ba64-9382a4b2487b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:30:23 compute-0 nova_compute[189608]: 2025-11-24 22:30:23.669 189613 DEBUG oslo_concurrency.lockutils [req-f01b0aa3-0ed4-4556-8030-8d8d1a002af1 req-c1f593f7-eebe-4166-890b-6f92e8e8dfc9 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "a3bee9ba-6618-44bd-a443-da9fff6862a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:30:23 compute-0 nova_compute[189608]: 2025-11-24 22:30:23.669 189613 DEBUG oslo_concurrency.lockutils [req-f01b0aa3-0ed4-4556-8030-8d8d1a002af1 req-c1f593f7-eebe-4166-890b-6f92e8e8dfc9 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "a3bee9ba-6618-44bd-a443-da9fff6862a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:30:23 compute-0 nova_compute[189608]: 2025-11-24 22:30:23.669 189613 DEBUG oslo_concurrency.lockutils [req-f01b0aa3-0ed4-4556-8030-8d8d1a002af1 req-c1f593f7-eebe-4166-890b-6f92e8e8dfc9 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "a3bee9ba-6618-44bd-a443-da9fff6862a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
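[editor's note] The Acquiring/acquired/released triplet above is oslo.concurrency's named-lock pattern wrapped around the event pop. A minimal sketch with the public lockutils API, reusing the lock names from this log:

    # Minimal sketch of the lock pattern in the three lockutils lines above,
    # using oslo.concurrency's public API; the lock names are copied from the log.
    from oslo_concurrency import lockutils

    # Context-manager form: acquire, run the critical section, release.
    with lockutils.lock("a3bee9ba-6618-44bd-a443-da9fff6862a9-events"):
        pass  # pop the pending network-vif-plugged event here

    # Decorator form, as used for the "compute_resources" lock later in this log.
    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        pass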
Nov 24 22:30:23 compute-0 nova_compute[189608]: 2025-11-24 22:30:23.669 189613 DEBUG nova.compute.manager [req-f01b0aa3-0ed4-4556-8030-8d8d1a002af1 req-c1f593f7-eebe-4166-890b-6f92e8e8dfc9 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Processing event network-vif-plugged-5efccbc3-b2bb-4d9d-ba64-9382a4b2487b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 24 22:30:23 compute-0 nova_compute[189608]: 2025-11-24 22:30:23.670 189613 DEBUG nova.compute.manager [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 24 22:30:23 compute-0 nova_compute[189608]: 2025-11-24 22:30:23.677 189613 DEBUG nova.virt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Emitting event <LifecycleEvent: 1764023423.6770568, a3bee9ba-6618-44bd-a443-da9fff6862a9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:30:23 compute-0 nova_compute[189608]: 2025-11-24 22:30:23.678 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] VM Resumed (Lifecycle Event)
Nov 24 22:30:23 compute-0 nova_compute[189608]: 2025-11-24 22:30:23.680 189613 DEBUG nova.virt.libvirt.driver [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 24 22:30:23 compute-0 nova_compute[189608]: 2025-11-24 22:30:23.691 189613 INFO nova.virt.libvirt.driver [-] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Instance spawned successfully.
Nov 24 22:30:23 compute-0 nova_compute[189608]: 2025-11-24 22:30:23.692 189613 DEBUG nova.virt.libvirt.driver [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 24 22:30:23 compute-0 nova_compute[189608]: 2025-11-24 22:30:23.704 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:30:23 compute-0 nova_compute[189608]: 2025-11-24 22:30:23.713 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 22:30:23 compute-0 nova_compute[189608]: 2025-11-24 22:30:23.717 189613 DEBUG nova.virt.libvirt.driver [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:30:23 compute-0 nova_compute[189608]: 2025-11-24 22:30:23.717 189613 DEBUG nova.virt.libvirt.driver [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:30:23 compute-0 nova_compute[189608]: 2025-11-24 22:30:23.718 189613 DEBUG nova.virt.libvirt.driver [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:30:23 compute-0 nova_compute[189608]: 2025-11-24 22:30:23.718 189613 DEBUG nova.virt.libvirt.driver [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:30:23 compute-0 nova_compute[189608]: 2025-11-24 22:30:23.719 189613 DEBUG nova.virt.libvirt.driver [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:30:23 compute-0 nova_compute[189608]: 2025-11-24 22:30:23.719 189613 DEBUG nova.virt.libvirt.driver [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:30:23 compute-0 nova_compute[189608]: 2025-11-24 22:30:23.744 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 22:30:23 compute-0 nova_compute[189608]: 2025-11-24 22:30:23.785 189613 INFO nova.compute.manager [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Took 17.44 seconds to spawn the instance on the hypervisor.
Nov 24 22:30:23 compute-0 nova_compute[189608]: 2025-11-24 22:30:23.786 189613 DEBUG nova.compute.manager [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:30:23 compute-0 nova_compute[189608]: 2025-11-24 22:30:23.870 189613 INFO nova.compute.manager [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Took 18.13 seconds to build instance.
Nov 24 22:30:23 compute-0 nova_compute[189608]: 2025-11-24 22:30:23.887 189613 DEBUG oslo_concurrency.lockutils [None req-a90f6aca-55e5-461e-a8e2-5176d642a0da fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Lock "a3bee9ba-6618-44bd-a443-da9fff6862a9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.229s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:30:24 compute-0 nova_compute[189608]: 2025-11-24 22:30:24.028 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:25 compute-0 nova_compute[189608]: 2025-11-24 22:30:25.095 189613 DEBUG nova.network.neutron [req-f9fcf225-c7c7-49ae-a137-7ed7e674dc47 req-ecb7ac3f-613d-4c99-8f97-16c0b6df4348 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Updated VIF entry in instance network info cache for port 8f00051e-bd87-48eb-aba6-5dbf3d527aef. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 22:30:25 compute-0 nova_compute[189608]: 2025-11-24 22:30:25.096 189613 DEBUG nova.network.neutron [req-f9fcf225-c7c7-49ae-a137-7ed7e674dc47 req-ecb7ac3f-613d-4c99-8f97-16c0b6df4348 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Updating instance_info_cache with network_info: [{"id": "8f00051e-bd87-48eb-aba6-5dbf3d527aef", "address": "fa:16:3e:d2:89:1f", "network": {"id": "6c160915-cb1d-4981-a2c7-30899c389f1d", "bridge": "br-int", "label": "tempest-network-smoke--162777084", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ac27d3d1c734f4bab455262f79d3106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f00051e-bd", "ovs_interfaceid": "8f00051e-bd87-48eb-aba6-5dbf3d527aef", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:30:25 compute-0 nova_compute[189608]: 2025-11-24 22:30:25.127 189613 DEBUG oslo_concurrency.lockutils [req-f9fcf225-c7c7-49ae-a137-7ed7e674dc47 req-ecb7ac3f-613d-4c99-8f97-16c0b6df4348 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Releasing lock "refresh_cache-cf45f1e3-b80d-4213-80aa-995f57a9a476" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:30:25 compute-0 podman[252782]: 2025-11-24 22:30:25.552471724 +0000 UTC m=+0.105451728 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 24 22:30:25 compute-0 nova_compute[189608]: 2025-11-24 22:30:25.813 189613 DEBUG nova.compute.manager [req-b592aa1f-6cfa-4de8-86b7-5c47a59238a8 req-415c6e67-a8dd-4929-985b-bac90202c57b c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Received event network-vif-plugged-5efccbc3-b2bb-4d9d-ba64-9382a4b2487b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:30:25 compute-0 nova_compute[189608]: 2025-11-24 22:30:25.817 189613 DEBUG oslo_concurrency.lockutils [req-b592aa1f-6cfa-4de8-86b7-5c47a59238a8 req-415c6e67-a8dd-4929-985b-bac90202c57b c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "a3bee9ba-6618-44bd-a443-da9fff6862a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:30:25 compute-0 nova_compute[189608]: 2025-11-24 22:30:25.818 189613 DEBUG oslo_concurrency.lockutils [req-b592aa1f-6cfa-4de8-86b7-5c47a59238a8 req-415c6e67-a8dd-4929-985b-bac90202c57b c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "a3bee9ba-6618-44bd-a443-da9fff6862a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:30:25 compute-0 nova_compute[189608]: 2025-11-24 22:30:25.819 189613 DEBUG oslo_concurrency.lockutils [req-b592aa1f-6cfa-4de8-86b7-5c47a59238a8 req-415c6e67-a8dd-4929-985b-bac90202c57b c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "a3bee9ba-6618-44bd-a443-da9fff6862a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:30:25 compute-0 nova_compute[189608]: 2025-11-24 22:30:25.820 189613 DEBUG nova.compute.manager [req-b592aa1f-6cfa-4de8-86b7-5c47a59238a8 req-415c6e67-a8dd-4929-985b-bac90202c57b c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] No waiting events found dispatching network-vif-plugged-5efccbc3-b2bb-4d9d-ba64-9382a4b2487b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 22:30:25 compute-0 nova_compute[189608]: 2025-11-24 22:30:25.821 189613 WARNING nova.compute.manager [req-b592aa1f-6cfa-4de8-86b7-5c47a59238a8 req-415c6e67-a8dd-4929-985b-bac90202c57b c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Received unexpected event network-vif-plugged-5efccbc3-b2bb-4d9d-ba64-9382a4b2487b for instance with vm_state active and task_state None.
Nov 24 22:30:26 compute-0 nova_compute[189608]: 2025-11-24 22:30:26.569 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:27 compute-0 nova_compute[189608]: 2025-11-24 22:30:27.616 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:27 compute-0 nova_compute[189608]: 2025-11-24 22:30:27.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:30:27 compute-0 nova_compute[189608]: 2025-11-24 22:30:27.796 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 22:30:27 compute-0 nova_compute[189608]: 2025-11-24 22:30:27.824 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 22:30:28 compute-0 podman[252808]: 2025-11-24 22:30:28.557135385 +0000 UTC m=+0.092048651 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 22:30:28 compute-0 podman[252807]: 2025-11-24 22:30:28.632230069 +0000 UTC m=+0.155337708 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 24 22:30:29 compute-0 nova_compute[189608]: 2025-11-24 22:30:29.033 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:29 compute-0 podman[203795]: time="2025-11-24T22:30:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:30:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:30:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 33224 "" "Go-http-client/1.1"
Nov 24 22:30:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:30:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6191 "" "Go-http-client/1.1"
Nov 24 22:30:29 compute-0 ovn_controller[97889]: 2025-11-24T22:30:29Z|00116|binding|INFO|Releasing lport ce3870c0-48db-470b-8d5d-479134c9b554 from this chassis (sb_readonly=0)
Nov 24 22:30:29 compute-0 ovn_controller[97889]: 2025-11-24T22:30:29Z|00117|binding|INFO|Releasing lport 5660e6d6-677f-4bf6-8ebf-40ac9c648155 from this chassis (sb_readonly=0)
Nov 24 22:30:29 compute-0 ovn_controller[97889]: 2025-11-24T22:30:29Z|00118|binding|INFO|Releasing lport 7dcd4ddb-3860-49b9-87ed-1daf692defef from this chassis (sb_readonly=0)
Nov 24 22:30:29 compute-0 ovn_controller[97889]: 2025-11-24T22:30:29Z|00119|binding|INFO|Releasing lport 467754a3-548e-4841-9628-d4a6a4daa2bb from this chassis (sb_readonly=0)
Nov 24 22:30:29 compute-0 nova_compute[189608]: 2025-11-24 22:30:29.929 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:30 compute-0 ovn_controller[97889]: 2025-11-24T22:30:30Z|00120|binding|INFO|Releasing lport ce3870c0-48db-470b-8d5d-479134c9b554 from this chassis (sb_readonly=0)
Nov 24 22:30:30 compute-0 ovn_controller[97889]: 2025-11-24T22:30:30Z|00121|binding|INFO|Releasing lport 5660e6d6-677f-4bf6-8ebf-40ac9c648155 from this chassis (sb_readonly=0)
Nov 24 22:30:30 compute-0 ovn_controller[97889]: 2025-11-24T22:30:30Z|00122|binding|INFO|Releasing lport 7dcd4ddb-3860-49b9-87ed-1daf692defef from this chassis (sb_readonly=0)
Nov 24 22:30:30 compute-0 ovn_controller[97889]: 2025-11-24T22:30:30Z|00123|binding|INFO|Releasing lport 467754a3-548e-4841-9628-d4a6a4daa2bb from this chassis (sb_readonly=0)
Nov 24 22:30:30 compute-0 nova_compute[189608]: 2025-11-24 22:30:30.845 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:31 compute-0 sshd-session[252805]: Invalid user manish from 185.217.1.246 port 28287
Nov 24 22:30:31 compute-0 openstack_network_exporter[205945]: ERROR   22:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:30:31 compute-0 openstack_network_exporter[205945]: ERROR   22:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:30:31 compute-0 openstack_network_exporter[205945]: ERROR   22:30:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:30:31 compute-0 openstack_network_exporter[205945]: ERROR   22:30:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:30:31 compute-0 openstack_network_exporter[205945]: ERROR   22:30:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:30:32 compute-0 nova_compute[189608]: 2025-11-24 22:30:32.596 189613 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764023417.589387, 57d86171-0790-4408-ae34-dfc07ee52747 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:30:32 compute-0 nova_compute[189608]: 2025-11-24 22:30:32.598 189613 INFO nova.compute.manager [-] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] VM Stopped (Lifecycle Event)
Nov 24 22:30:32 compute-0 nova_compute[189608]: 2025-11-24 22:30:32.622 189613 DEBUG nova.compute.manager [None req-ed0db495-6d17-4587-9c2d-c45630eb6495 - - - - - -] [instance: 57d86171-0790-4408-ae34-dfc07ee52747] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:30:32 compute-0 nova_compute[189608]: 2025-11-24 22:30:32.623 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:33 compute-0 sshd-session[252805]: Disconnecting invalid user manish 185.217.1.246 port 28287: Change of username or service not allowed: (manish,ssh-connection) -> (t128,ssh-connection) [preauth]
Nov 24 22:30:33 compute-0 nova_compute[189608]: 2025-11-24 22:30:33.315 189613 DEBUG nova.objects.instance [None req-0dba2159-2eb3-45e8-b3bb-bc3b91a5790d d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Lazy-loading 'flavor' on Instance uuid 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:30:33 compute-0 nova_compute[189608]: 2025-11-24 22:30:33.395 189613 DEBUG oslo_concurrency.lockutils [None req-0dba2159-2eb3-45e8-b3bb-bc3b91a5790d d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Acquiring lock "refresh_cache-8b851edf-b3aa-4ca0-a142-8dd0d0e6270a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:30:33 compute-0 nova_compute[189608]: 2025-11-24 22:30:33.396 189613 DEBUG oslo_concurrency.lockutils [None req-0dba2159-2eb3-45e8-b3bb-bc3b91a5790d d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Acquired lock "refresh_cache-8b851edf-b3aa-4ca0-a142-8dd0d0e6270a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:30:33 compute-0 nova_compute[189608]: 2025-11-24 22:30:33.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:30:33 compute-0 nova_compute[189608]: 2025-11-24 22:30:33.827 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:30:33 compute-0 nova_compute[189608]: 2025-11-24 22:30:33.827 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:30:33 compute-0 nova_compute[189608]: 2025-11-24 22:30:33.828 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:30:33 compute-0 nova_compute[189608]: 2025-11-24 22:30:33.828 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 22:30:34 compute-0 nova_compute[189608]: 2025-11-24 22:30:34.029 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cf45f1e3-b80d-4213-80aa-995f57a9a476/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:30:34 compute-0 nova_compute[189608]: 2025-11-24 22:30:34.059 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:34 compute-0 nova_compute[189608]: 2025-11-24 22:30:34.114 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cf45f1e3-b80d-4213-80aa-995f57a9a476/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:30:34 compute-0 nova_compute[189608]: 2025-11-24 22:30:34.120 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cf45f1e3-b80d-4213-80aa-995f57a9a476/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:30:34 compute-0 nova_compute[189608]: 2025-11-24 22:30:34.197 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cf45f1e3-b80d-4213-80aa-995f57a9a476/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:30:34 compute-0 nova_compute[189608]: 2025-11-24 22:30:34.229 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:30:34 compute-0 nova_compute[189608]: 2025-11-24 22:30:34.321 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:30:34 compute-0 nova_compute[189608]: 2025-11-24 22:30:34.322 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:30:34 compute-0 nova_compute[189608]: 2025-11-24 22:30:34.424 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:30:34 compute-0 nova_compute[189608]: 2025-11-24 22:30:34.436 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:30:34 compute-0 nova_compute[189608]: 2025-11-24 22:30:34.506 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:30:34 compute-0 nova_compute[189608]: 2025-11-24 22:30:34.509 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:30:34 compute-0 nova_compute[189608]: 2025-11-24 22:30:34.596 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8b851edf-b3aa-4ca0-a142-8dd0d0e6270a/disk --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:30:34 compute-0 nova_compute[189608]: 2025-11-24 22:30:34.609 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f238e71a-660e-497c-8472-193245387bcf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:30:34 compute-0 nova_compute[189608]: 2025-11-24 22:30:34.678 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f238e71a-660e-497c-8472-193245387bcf/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:30:34 compute-0 nova_compute[189608]: 2025-11-24 22:30:34.679 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f238e71a-660e-497c-8472-193245387bcf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:30:34 compute-0 nova_compute[189608]: 2025-11-24 22:30:34.775 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f238e71a-660e-497c-8472-193245387bcf/disk --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
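[editor's note] Each disk probe above runs the same command twice per instance: qemu-img info under oslo_concurrency.prlimit, which caps address space at 1 GiB and CPU time at 30 s. A sketch that reproduces the invocation from the log via the standard library; the disk path is copied from the log and is assumed to exist on the host:

    import json
    import subprocess

    # Disk path taken from the log above; it must exist for this to run.
    disk = "/var/lib/nova/instances/cf45f1e3-b80d-4213-80aa-995f57a9a476/disk"
    cmd = [
        "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
        "--as=1073741824",          # 1 GiB address-space cap
        "--cpu=30",                 # 30 s CPU-time cap
        "--", "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info", disk, "--force-share", "--output=json",
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    info = json.loads(out.stdout)
    print(info["virtual-size"], info.get("format"))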
Nov 24 22:30:35 compute-0 nova_compute[189608]: 2025-11-24 22:30:35.149 189613 DEBUG nova.network.neutron [None req-0dba2159-2eb3-45e8-b3bb-bc3b91a5790d d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 22:30:35 compute-0 nova_compute[189608]: 2025-11-24 22:30:35.314 189613 DEBUG nova.compute.manager [req-86c14614-03e0-4cae-b7ec-47f5cdfca7d2 req-f732237b-9bea-4960-bced-0d42dc65b01b c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Received event network-changed-e0390902-6ae3-485d-b497-f57a8cca001c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:30:35 compute-0 nova_compute[189608]: 2025-11-24 22:30:35.315 189613 DEBUG nova.compute.manager [req-86c14614-03e0-4cae-b7ec-47f5cdfca7d2 req-f732237b-9bea-4960-bced-0d42dc65b01b c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Refreshing instance network info cache due to event network-changed-e0390902-6ae3-485d-b497-f57a8cca001c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 22:30:35 compute-0 nova_compute[189608]: 2025-11-24 22:30:35.316 189613 DEBUG oslo_concurrency.lockutils [req-86c14614-03e0-4cae-b7ec-47f5cdfca7d2 req-f732237b-9bea-4960-bced-0d42dc65b01b c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "refresh_cache-8b851edf-b3aa-4ca0-a142-8dd0d0e6270a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:30:35 compute-0 nova_compute[189608]: 2025-11-24 22:30:35.460 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:30:35 compute-0 nova_compute[189608]: 2025-11-24 22:30:35.461 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4715MB free_disk=72.09610748291016GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 22:30:35 compute-0 nova_compute[189608]: 2025-11-24 22:30:35.462 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:30:35 compute-0 nova_compute[189608]: 2025-11-24 22:30:35.462 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:30:35 compute-0 nova_compute[189608]: 2025-11-24 22:30:35.727 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:30:35 compute-0 nova_compute[189608]: 2025-11-24 22:30:35.728 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance f238e71a-660e-497c-8472-193245387bcf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:30:35 compute-0 nova_compute[189608]: 2025-11-24 22:30:35.728 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance cf45f1e3-b80d-4213-80aa-995f57a9a476 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:30:35 compute-0 nova_compute[189608]: 2025-11-24 22:30:35.728 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance a3bee9ba-6618-44bd-a443-da9fff6862a9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:30:35 compute-0 nova_compute[189608]: 2025-11-24 22:30:35.729 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 22:30:35 compute-0 nova_compute[189608]: 2025-11-24 22:30:35.729 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 22:30:35 compute-0 nova_compute[189608]: 2025-11-24 22:30:35.829 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Refreshing inventories for resource provider 7680d048-14f1-46f8-a34d-a7eb32eb11df _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 24 22:30:35 compute-0 nova_compute[189608]: 2025-11-24 22:30:35.942 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Updating ProviderTree inventory for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 24 22:30:35 compute-0 nova_compute[189608]: 2025-11-24 22:30:35.944 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Updating inventory in ProviderTree for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 24 22:30:35 compute-0 nova_compute[189608]: 2025-11-24 22:30:35.964 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Refreshing aggregate associations for resource provider 7680d048-14f1-46f8-a34d-a7eb32eb11df, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 24 22:30:35 compute-0 nova_compute[189608]: 2025-11-24 22:30:35.991 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Refreshing trait associations for resource provider 7680d048-14f1-46f8-a34d-a7eb32eb11df, traits: HW_CPU_X86_AVX2,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NODE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE4A,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_ACCELERATORS,HW_CPU_X86_SHA,HW_CPU_X86_AVX,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_1_2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 24 22:30:36 compute-0 nova_compute[189608]: 2025-11-24 22:30:36.113 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:30:36 compute-0 nova_compute[189608]: 2025-11-24 22:30:36.134 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:30:36 compute-0 nova_compute[189608]: 2025-11-24 22:30:36.159 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 22:30:36 compute-0 nova_compute[189608]: 2025-11-24 22:30:36.160 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:30:37 compute-0 ovn_controller[97889]: 2025-11-24T22:30:37Z|00124|binding|INFO|Releasing lport ce3870c0-48db-470b-8d5d-479134c9b554 from this chassis (sb_readonly=0)
Nov 24 22:30:37 compute-0 ovn_controller[97889]: 2025-11-24T22:30:37Z|00125|binding|INFO|Releasing lport 5660e6d6-677f-4bf6-8ebf-40ac9c648155 from this chassis (sb_readonly=0)
Nov 24 22:30:37 compute-0 ovn_controller[97889]: 2025-11-24T22:30:37Z|00126|binding|INFO|Releasing lport 7dcd4ddb-3860-49b9-87ed-1daf692defef from this chassis (sb_readonly=0)
Nov 24 22:30:37 compute-0 ovn_controller[97889]: 2025-11-24T22:30:37Z|00127|binding|INFO|Releasing lport 467754a3-548e-4841-9628-d4a6a4daa2bb from this chassis (sb_readonly=0)
Nov 24 22:30:37 compute-0 nova_compute[189608]: 2025-11-24 22:30:37.538 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:37 compute-0 podman[252877]: 2025-11-24 22:30:37.616739388 +0000 UTC m=+0.166721191 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 24 22:30:37 compute-0 nova_compute[189608]: 2025-11-24 22:30:37.626 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:37 compute-0 sshd-session[252876]: Invalid user sol from 45.148.10.240 port 42386
Nov 24 22:30:37 compute-0 sshd-session[252876]: Connection closed by invalid user sol 45.148.10.240 port 42386 [preauth]
Nov 24 22:30:37 compute-0 nova_compute[189608]: 2025-11-24 22:30:37.982 189613 DEBUG nova.network.neutron [None req-0dba2159-2eb3-45e8-b3bb-bc3b91a5790d d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Updating instance_info_cache with network_info: [{"id": "e0390902-6ae3-485d-b497-f57a8cca001c", "address": "fa:16:3e:84:e8:3d", "network": {"id": "c09fe20f-09f5-4457-8c18-2dd55de423b7", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1769316562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5225a786b1b64fcbbd2af0a1b5082c92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0390902-6a", "ovs_interfaceid": "e0390902-6ae3-485d-b497-f57a8cca001c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:30:38 compute-0 nova_compute[189608]: 2025-11-24 22:30:38.010 189613 DEBUG oslo_concurrency.lockutils [None req-0dba2159-2eb3-45e8-b3bb-bc3b91a5790d d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Releasing lock "refresh_cache-8b851edf-b3aa-4ca0-a142-8dd0d0e6270a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:30:38 compute-0 nova_compute[189608]: 2025-11-24 22:30:38.012 189613 DEBUG nova.compute.manager [None req-0dba2159-2eb3-45e8-b3bb-bc3b91a5790d d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Nov 24 22:30:38 compute-0 nova_compute[189608]: 2025-11-24 22:30:38.013 189613 DEBUG nova.compute.manager [None req-0dba2159-2eb3-45e8-b3bb-bc3b91a5790d d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] network_info to inject: |[{"id": "e0390902-6ae3-485d-b497-f57a8cca001c", "address": "fa:16:3e:84:e8:3d", "network": {"id": "c09fe20f-09f5-4457-8c18-2dd55de423b7", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1769316562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5225a786b1b64fcbbd2af0a1b5082c92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0390902-6a", "ovs_interfaceid": "e0390902-6ae3-485d-b497-f57a8cca001c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Nov 24 22:30:38 compute-0 nova_compute[189608]: 2025-11-24 22:30:38.015 189613 DEBUG oslo_concurrency.lockutils [req-86c14614-03e0-4cae-b7ec-47f5cdfca7d2 req-f732237b-9bea-4960-bced-0d42dc65b01b c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquired lock "refresh_cache-8b851edf-b3aa-4ca0-a142-8dd0d0e6270a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:30:38 compute-0 nova_compute[189608]: 2025-11-24 22:30:38.016 189613 DEBUG nova.network.neutron [req-86c14614-03e0-4cae-b7ec-47f5cdfca7d2 req-f732237b-9bea-4960-bced-0d42dc65b01b c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Refreshing network info cache for port e0390902-6ae3-485d-b497-f57a8cca001c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 22:30:38 compute-0 nova_compute[189608]: 2025-11-24 22:30:38.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:30:38 compute-0 nova_compute[189608]: 2025-11-24 22:30:38.796 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:30:38 compute-0 nova_compute[189608]: 2025-11-24 22:30:38.797 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:30:38 compute-0 nova_compute[189608]: 2025-11-24 22:30:38.798 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:30:38 compute-0 nova_compute[189608]: 2025-11-24 22:30:38.799 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 24 22:30:39 compute-0 nova_compute[189608]: 2025-11-24 22:30:39.040 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:39 compute-0 nova_compute[189608]: 2025-11-24 22:30:39.810 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:30:40 compute-0 nova_compute[189608]: 2025-11-24 22:30:40.068 189613 DEBUG nova.objects.instance [None req-e05495c4-937e-4ecc-b75d-c069eb814a2c d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Lazy-loading 'flavor' on Instance uuid 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:30:40 compute-0 nova_compute[189608]: 2025-11-24 22:30:40.098 189613 DEBUG oslo_concurrency.lockutils [None req-e05495c4-937e-4ecc-b75d-c069eb814a2c d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Acquiring lock "refresh_cache-8b851edf-b3aa-4ca0-a142-8dd0d0e6270a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:30:40 compute-0 sshd-session[252875]: Invalid user t128 from 185.217.1.246 port 1442
Nov 24 22:30:40 compute-0 sshd-session[252875]: Disconnecting invalid user t128 185.217.1.246 port 1442: Change of username or service not allowed: (t128,ssh-connection) -> (backup,ssh-connection) [preauth]
Nov 24 22:30:41 compute-0 nova_compute[189608]: 2025-11-24 22:30:41.091 189613 DEBUG nova.network.neutron [req-86c14614-03e0-4cae-b7ec-47f5cdfca7d2 req-f732237b-9bea-4960-bced-0d42dc65b01b c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Updated VIF entry in instance network info cache for port e0390902-6ae3-485d-b497-f57a8cca001c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 22:30:41 compute-0 nova_compute[189608]: 2025-11-24 22:30:41.092 189613 DEBUG nova.network.neutron [req-86c14614-03e0-4cae-b7ec-47f5cdfca7d2 req-f732237b-9bea-4960-bced-0d42dc65b01b c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Updating instance_info_cache with network_info: [{"id": "e0390902-6ae3-485d-b497-f57a8cca001c", "address": "fa:16:3e:84:e8:3d", "network": {"id": "c09fe20f-09f5-4457-8c18-2dd55de423b7", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1769316562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5225a786b1b64fcbbd2af0a1b5082c92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0390902-6a", "ovs_interfaceid": "e0390902-6ae3-485d-b497-f57a8cca001c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:30:41 compute-0 nova_compute[189608]: 2025-11-24 22:30:41.110 189613 DEBUG oslo_concurrency.lockutils [req-86c14614-03e0-4cae-b7ec-47f5cdfca7d2 req-f732237b-9bea-4960-bced-0d42dc65b01b c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Releasing lock "refresh_cache-8b851edf-b3aa-4ca0-a142-8dd0d0e6270a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:30:41 compute-0 nova_compute[189608]: 2025-11-24 22:30:41.111 189613 DEBUG oslo_concurrency.lockutils [None req-e05495c4-937e-4ecc-b75d-c069eb814a2c d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Acquired lock "refresh_cache-8b851edf-b3aa-4ca0-a142-8dd0d0e6270a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:30:41 compute-0 nova_compute[189608]: 2025-11-24 22:30:41.343 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:41 compute-0 nova_compute[189608]: 2025-11-24 22:30:41.795 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:30:42 compute-0 podman[252901]: 2025-11-24 22:30:42.550288764 +0000 UTC m=+0.109568985 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 24 22:30:42 compute-0 nova_compute[189608]: 2025-11-24 22:30:42.598 189613 DEBUG nova.network.neutron [None req-e05495c4-937e-4ecc-b75d-c069eb814a2c d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 22:30:42 compute-0 nova_compute[189608]: 2025-11-24 22:30:42.628 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:42 compute-0 nova_compute[189608]: 2025-11-24 22:30:42.740 189613 DEBUG nova.compute.manager [req-25bf1a2d-bb95-4132-a6c4-42be55a61cbc req-b0f1ff24-4b04-4b75-b7d9-8cdffd57a999 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Received event network-changed-e0390902-6ae3-485d-b497-f57a8cca001c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:30:42 compute-0 nova_compute[189608]: 2025-11-24 22:30:42.742 189613 DEBUG nova.compute.manager [req-25bf1a2d-bb95-4132-a6c4-42be55a61cbc req-b0f1ff24-4b04-4b75-b7d9-8cdffd57a999 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Refreshing instance network info cache due to event network-changed-e0390902-6ae3-485d-b497-f57a8cca001c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 22:30:42 compute-0 nova_compute[189608]: 2025-11-24 22:30:42.742 189613 DEBUG oslo_concurrency.lockutils [req-25bf1a2d-bb95-4132-a6c4-42be55a61cbc req-b0f1ff24-4b04-4b75-b7d9-8cdffd57a999 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "refresh_cache-8b851edf-b3aa-4ca0-a142-8dd0d0e6270a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:30:42 compute-0 nova_compute[189608]: 2025-11-24 22:30:42.792 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:30:44 compute-0 nova_compute[189608]: 2025-11-24 22:30:44.044 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:44 compute-0 nova_compute[189608]: 2025-11-24 22:30:44.170 189613 DEBUG nova.network.neutron [None req-e05495c4-937e-4ecc-b75d-c069eb814a2c d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Updating instance_info_cache with network_info: [{"id": "e0390902-6ae3-485d-b497-f57a8cca001c", "address": "fa:16:3e:84:e8:3d", "network": {"id": "c09fe20f-09f5-4457-8c18-2dd55de423b7", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1769316562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5225a786b1b64fcbbd2af0a1b5082c92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0390902-6a", "ovs_interfaceid": "e0390902-6ae3-485d-b497-f57a8cca001c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:30:44 compute-0 nova_compute[189608]: 2025-11-24 22:30:44.202 189613 DEBUG oslo_concurrency.lockutils [None req-e05495c4-937e-4ecc-b75d-c069eb814a2c d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Releasing lock "refresh_cache-8b851edf-b3aa-4ca0-a142-8dd0d0e6270a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:30:44 compute-0 nova_compute[189608]: 2025-11-24 22:30:44.203 189613 DEBUG nova.compute.manager [None req-e05495c4-937e-4ecc-b75d-c069eb814a2c d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Nov 24 22:30:44 compute-0 nova_compute[189608]: 2025-11-24 22:30:44.204 189613 DEBUG nova.compute.manager [None req-e05495c4-937e-4ecc-b75d-c069eb814a2c d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] network_info to inject: |[{"id": "e0390902-6ae3-485d-b497-f57a8cca001c", "address": "fa:16:3e:84:e8:3d", "network": {"id": "c09fe20f-09f5-4457-8c18-2dd55de423b7", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1769316562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5225a786b1b64fcbbd2af0a1b5082c92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0390902-6a", "ovs_interfaceid": "e0390902-6ae3-485d-b497-f57a8cca001c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Nov 24 22:30:44 compute-0 nova_compute[189608]: 2025-11-24 22:30:44.209 189613 DEBUG oslo_concurrency.lockutils [req-25bf1a2d-bb95-4132-a6c4-42be55a61cbc req-b0f1ff24-4b04-4b75-b7d9-8cdffd57a999 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquired lock "refresh_cache-8b851edf-b3aa-4ca0-a142-8dd0d0e6270a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:30:44 compute-0 nova_compute[189608]: 2025-11-24 22:30:44.210 189613 DEBUG nova.network.neutron [req-25bf1a2d-bb95-4132-a6c4-42be55a61cbc req-b0f1ff24-4b04-4b75-b7d9-8cdffd57a999 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Refreshing network info cache for port e0390902-6ae3-485d-b497-f57a8cca001c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 22:30:44 compute-0 nova_compute[189608]: 2025-11-24 22:30:44.988 189613 DEBUG oslo_concurrency.lockutils [None req-f29a690a-59f3-4134-b082-735d6509fea7 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Acquiring lock "8b851edf-b3aa-4ca0-a142-8dd0d0e6270a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:30:44 compute-0 nova_compute[189608]: 2025-11-24 22:30:44.989 189613 DEBUG oslo_concurrency.lockutils [None req-f29a690a-59f3-4134-b082-735d6509fea7 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Lock "8b851edf-b3aa-4ca0-a142-8dd0d0e6270a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:30:44 compute-0 nova_compute[189608]: 2025-11-24 22:30:44.990 189613 DEBUG oslo_concurrency.lockutils [None req-f29a690a-59f3-4134-b082-735d6509fea7 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Acquiring lock "8b851edf-b3aa-4ca0-a142-8dd0d0e6270a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:30:44 compute-0 nova_compute[189608]: 2025-11-24 22:30:44.990 189613 DEBUG oslo_concurrency.lockutils [None req-f29a690a-59f3-4134-b082-735d6509fea7 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Lock "8b851edf-b3aa-4ca0-a142-8dd0d0e6270a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:30:44 compute-0 nova_compute[189608]: 2025-11-24 22:30:44.991 189613 DEBUG oslo_concurrency.lockutils [None req-f29a690a-59f3-4134-b082-735d6509fea7 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Lock "8b851edf-b3aa-4ca0-a142-8dd0d0e6270a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:30:44 compute-0 nova_compute[189608]: 2025-11-24 22:30:44.992 189613 INFO nova.compute.manager [None req-f29a690a-59f3-4134-b082-735d6509fea7 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Terminating instance
Nov 24 22:30:44 compute-0 nova_compute[189608]: 2025-11-24 22:30:44.993 189613 DEBUG nova.compute.manager [None req-f29a690a-59f3-4134-b082-735d6509fea7 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 24 22:30:45 compute-0 kernel: tape0390902-6a (unregistering): left promiscuous mode
Nov 24 22:30:45 compute-0 NetworkManager[56413]: <info>  [1764023445.0446] device (tape0390902-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 24 22:30:45 compute-0 ovn_controller[97889]: 2025-11-24T22:30:45Z|00128|binding|INFO|Releasing lport e0390902-6ae3-485d-b497-f57a8cca001c from this chassis (sb_readonly=0)
Nov 24 22:30:45 compute-0 ovn_controller[97889]: 2025-11-24T22:30:45Z|00129|binding|INFO|Setting lport e0390902-6ae3-485d-b497-f57a8cca001c down in Southbound
Nov 24 22:30:45 compute-0 ovn_controller[97889]: 2025-11-24T22:30:45Z|00130|binding|INFO|Removing iface tape0390902-6a ovn-installed in OVS
Nov 24 22:30:45 compute-0 nova_compute[189608]: 2025-11-24 22:30:45.066 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:45 compute-0 nova_compute[189608]: 2025-11-24 22:30:45.071 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:45 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:45.082 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:e8:3d 10.100.0.13'], port_security=['fa:16:3e:84:e8:3d 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '8b851edf-b3aa-4ca0-a142-8dd0d0e6270a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c09fe20f-09f5-4457-8c18-2dd55de423b7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5225a786b1b64fcbbd2af0a1b5082c92', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'f2367eaf-5847-48eb-a9d7-e37430a35fff', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.184'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4e56029b-3de0-40b6-9ab5-3053975c41b2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], logical_port=e0390902-6ae3-485d-b497-f57a8cca001c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:30:45 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:45.083 106776 INFO neutron.agent.ovn.metadata.agent [-] Port e0390902-6ae3-485d-b497-f57a8cca001c in datapath c09fe20f-09f5-4457-8c18-2dd55de423b7 unbound from our chassis
Nov 24 22:30:45 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:45.085 106776 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c09fe20f-09f5-4457-8c18-2dd55de423b7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 22:30:45 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:45.086 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[1fb891af-542f-4813-abcb-12bb07a0a13f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:45 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:45.087 106776 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c09fe20f-09f5-4457-8c18-2dd55de423b7 namespace which is not needed anymore
Nov 24 22:30:45 compute-0 nova_compute[189608]: 2025-11-24 22:30:45.092 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:45 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Nov 24 22:30:45 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 44.055s CPU time.
Nov 24 22:30:45 compute-0 systemd-machined[155884]: Machine qemu-6-instance-00000006 terminated.
Nov 24 22:30:45 compute-0 podman[252921]: 2025-11-24 22:30:45.224807888 +0000 UTC m=+0.160649883 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Nov 24 22:30:45 compute-0 nova_compute[189608]: 2025-11-24 22:30:45.225 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:45 compute-0 sshd-session[252918]: Invalid user backup from 185.217.1.246 port 22894
Nov 24 22:30:45 compute-0 nova_compute[189608]: 2025-11-24 22:30:45.248 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:45 compute-0 nova_compute[189608]: 2025-11-24 22:30:45.271 189613 INFO nova.virt.libvirt.driver [-] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Instance destroyed successfully.
Nov 24 22:30:45 compute-0 nova_compute[189608]: 2025-11-24 22:30:45.271 189613 DEBUG nova.objects.instance [None req-f29a690a-59f3-4134-b082-735d6509fea7 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Lazy-loading 'resources' on Instance uuid 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:30:45 compute-0 nova_compute[189608]: 2025-11-24 22:30:45.285 189613 DEBUG nova.virt.libvirt.vif [None req-f29a690a-59f3-4134-b082-735d6509fea7 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T22:29:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-476448184',display_name='tempest-AttachInterfacesUnderV243Test-server-476448184',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-476448184',id=6,image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBModyKpHo+bUle696Y53IH9hCC0Nmu0qTbd1dYeKChZbasisMOqUyA99gseDuuBNddhIrU0ChnVA7KG8QVFP+O3BAeUOzrsyIrYuwW2ipaQtlPgdQM4pYzzTX/M2GYBy6Q==',key_name='tempest-keypair-1779492647',keypairs=<?>,launch_index=0,launched_at=2025-11-24T22:29:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5225a786b1b64fcbbd2af0a1b5082c92',ramdisk_id='',reservation_id='r-lj5trgax',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-929420375',owner_user_name='tempest-AttachInterfacesUnderV243Test-929420375-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T22:30:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d2ee7a1723f8477f92f62974f0676bd8',uuid=8b851edf-b3aa-4ca0-a142-8dd0d0e6270a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e0390902-6ae3-485d-b497-f57a8cca001c", "address": "fa:16:3e:84:e8:3d", "network": {"id": "c09fe20f-09f5-4457-8c18-2dd55de423b7", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1769316562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5225a786b1b64fcbbd2af0a1b5082c92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0390902-6a", "ovs_interfaceid": "e0390902-6ae3-485d-b497-f57a8cca001c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 24 22:30:45 compute-0 nova_compute[189608]: 2025-11-24 22:30:45.286 189613 DEBUG nova.network.os_vif_util [None req-f29a690a-59f3-4134-b082-735d6509fea7 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Converting VIF {"id": "e0390902-6ae3-485d-b497-f57a8cca001c", "address": "fa:16:3e:84:e8:3d", "network": {"id": "c09fe20f-09f5-4457-8c18-2dd55de423b7", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1769316562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5225a786b1b64fcbbd2af0a1b5082c92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0390902-6a", "ovs_interfaceid": "e0390902-6ae3-485d-b497-f57a8cca001c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 22:30:45 compute-0 nova_compute[189608]: 2025-11-24 22:30:45.287 189613 DEBUG nova.network.os_vif_util [None req-f29a690a-59f3-4134-b082-735d6509fea7 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:84:e8:3d,bridge_name='br-int',has_traffic_filtering=True,id=e0390902-6ae3-485d-b497-f57a8cca001c,network=Network(c09fe20f-09f5-4457-8c18-2dd55de423b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0390902-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 22:30:45 compute-0 nova_compute[189608]: 2025-11-24 22:30:45.287 189613 DEBUG os_vif [None req-f29a690a-59f3-4134-b082-735d6509fea7 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:84:e8:3d,bridge_name='br-int',has_traffic_filtering=True,id=e0390902-6ae3-485d-b497-f57a8cca001c,network=Network(c09fe20f-09f5-4457-8c18-2dd55de423b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0390902-6a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 24 22:30:45 compute-0 nova_compute[189608]: 2025-11-24 22:30:45.289 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:45 compute-0 nova_compute[189608]: 2025-11-24 22:30:45.289 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape0390902-6a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:30:45 compute-0 nova_compute[189608]: 2025-11-24 22:30:45.291 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:45 compute-0 nova_compute[189608]: 2025-11-24 22:30:45.294 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 22:30:45 compute-0 neutron-haproxy-ovnmeta-c09fe20f-09f5-4457-8c18-2dd55de423b7[251515]: [NOTICE]   (251538) : haproxy version is 2.8.14-c23fe91
Nov 24 22:30:45 compute-0 nova_compute[189608]: 2025-11-24 22:30:45.297 189613 INFO os_vif [None req-f29a690a-59f3-4134-b082-735d6509fea7 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:84:e8:3d,bridge_name='br-int',has_traffic_filtering=True,id=e0390902-6ae3-485d-b497-f57a8cca001c,network=Network(c09fe20f-09f5-4457-8c18-2dd55de423b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0390902-6a')
Nov 24 22:30:45 compute-0 nova_compute[189608]: 2025-11-24 22:30:45.297 189613 INFO nova.virt.libvirt.driver [None req-f29a690a-59f3-4134-b082-735d6509fea7 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Deleting instance files /var/lib/nova/instances/8b851edf-b3aa-4ca0-a142-8dd0d0e6270a_del
Nov 24 22:30:45 compute-0 neutron-haproxy-ovnmeta-c09fe20f-09f5-4457-8c18-2dd55de423b7[251515]: [NOTICE]   (251538) : path to executable is /usr/sbin/haproxy
Nov 24 22:30:45 compute-0 neutron-haproxy-ovnmeta-c09fe20f-09f5-4457-8c18-2dd55de423b7[251515]: [WARNING]  (251538) : Exiting Master process...
Nov 24 22:30:45 compute-0 nova_compute[189608]: 2025-11-24 22:30:45.298 189613 INFO nova.virt.libvirt.driver [None req-f29a690a-59f3-4134-b082-735d6509fea7 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Deletion of /var/lib/nova/instances/8b851edf-b3aa-4ca0-a142-8dd0d0e6270a_del complete
Nov 24 22:30:45 compute-0 neutron-haproxy-ovnmeta-c09fe20f-09f5-4457-8c18-2dd55de423b7[251515]: [ALERT]    (251538) : Current worker (251540) exited with code 143 (Terminated)
Nov 24 22:30:45 compute-0 neutron-haproxy-ovnmeta-c09fe20f-09f5-4457-8c18-2dd55de423b7[251515]: [WARNING]  (251538) : All workers exited. Exiting... (0)
Nov 24 22:30:45 compute-0 systemd[1]: libpod-faf204fcd1bece95f7146446c8f002aec55d033b96981e54e29e7ddcdfa4707d.scope: Deactivated successfully.
Nov 24 22:30:45 compute-0 podman[252963]: 2025-11-24 22:30:45.310672006 +0000 UTC m=+0.081454072 container died faf204fcd1bece95f7146446c8f002aec55d033b96981e54e29e7ddcdfa4707d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c09fe20f-09f5-4457-8c18-2dd55de423b7, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 24 22:30:45 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-faf204fcd1bece95f7146446c8f002aec55d033b96981e54e29e7ddcdfa4707d-userdata-shm.mount: Deactivated successfully.
Nov 24 22:30:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba1ed220580f7dfe34bd3547337bff8714309264e691441f2058eea1b0805f7d-merged.mount: Deactivated successfully.
Nov 24 22:30:45 compute-0 nova_compute[189608]: 2025-11-24 22:30:45.359 189613 INFO nova.compute.manager [None req-f29a690a-59f3-4134-b082-735d6509fea7 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Took 0.37 seconds to destroy the instance on the hypervisor.
Nov 24 22:30:45 compute-0 nova_compute[189608]: 2025-11-24 22:30:45.360 189613 DEBUG oslo.service.loopingcall [None req-f29a690a-59f3-4134-b082-735d6509fea7 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 24 22:30:45 compute-0 nova_compute[189608]: 2025-11-24 22:30:45.360 189613 DEBUG nova.compute.manager [-] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 24 22:30:45 compute-0 nova_compute[189608]: 2025-11-24 22:30:45.360 189613 DEBUG nova.network.neutron [-] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 24 22:30:45 compute-0 podman[252963]: 2025-11-24 22:30:45.388725411 +0000 UTC m=+0.159507477 container cleanup faf204fcd1bece95f7146446c8f002aec55d033b96981e54e29e7ddcdfa4707d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c09fe20f-09f5-4457-8c18-2dd55de423b7, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 24 22:30:45 compute-0 systemd[1]: libpod-conmon-faf204fcd1bece95f7146446c8f002aec55d033b96981e54e29e7ddcdfa4707d.scope: Deactivated successfully.
Nov 24 22:30:45 compute-0 nova_compute[189608]: 2025-11-24 22:30:45.447 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:45 compute-0 podman[253006]: 2025-11-24 22:30:45.538424353 +0000 UTC m=+0.119276408 container remove faf204fcd1bece95f7146446c8f002aec55d033b96981e54e29e7ddcdfa4707d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c09fe20f-09f5-4457-8c18-2dd55de423b7, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 22:30:45 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:45.554 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[384e7999-2bc5-4916-88e5-2b178d289920]: (4, ('Mon Nov 24 10:30:45 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c09fe20f-09f5-4457-8c18-2dd55de423b7 (faf204fcd1bece95f7146446c8f002aec55d033b96981e54e29e7ddcdfa4707d)\nfaf204fcd1bece95f7146446c8f002aec55d033b96981e54e29e7ddcdfa4707d\nMon Nov 24 10:30:45 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c09fe20f-09f5-4457-8c18-2dd55de423b7 (faf204fcd1bece95f7146446c8f002aec55d033b96981e54e29e7ddcdfa4707d)\nfaf204fcd1bece95f7146446c8f002aec55d033b96981e54e29e7ddcdfa4707d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:45 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:45.557 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[03d6b3e4-97a2-46e6-8a26-c62031fb4926]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:45 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:45.559 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc09fe20f-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:30:45 compute-0 nova_compute[189608]: 2025-11-24 22:30:45.562 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:45 compute-0 kernel: tapc09fe20f-00: left promiscuous mode
Nov 24 22:30:45 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:45.570 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[600f0ea2-4148-4da7-99f9-7491238be48f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:45 compute-0 nova_compute[189608]: 2025-11-24 22:30:45.584 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:45 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:45.591 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[c1703edd-c4cb-4909-9b2a-5afbe652f521]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:45 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:45.593 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[2204c502-69f1-419a-a936-4ac532bb12d3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:45 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:45.612 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[71b69f90-2900-46e5-8518-09f20dc021bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 520630, 'reachable_time': 42508, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253034, 'error': None, 'target': 'ovnmeta-c09fe20f-09f5-4457-8c18-2dd55de423b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:45 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:45.615 106891 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c09fe20f-09f5-4457-8c18-2dd55de423b7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 24 22:30:45 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:45.616 106891 DEBUG oslo.privsep.daemon [-] privsep: reply[89f05b99-aea5-4392-8ab1-dc112205498d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:45 compute-0 systemd[1]: run-netns-ovnmeta\x2dc09fe20f\x2d09f5\x2d4457\x2d8c18\x2d2dd55de423b7.mount: Deactivated successfully.
Nov 24 22:30:45 compute-0 nova_compute[189608]: 2025-11-24 22:30:45.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:30:45 compute-0 nova_compute[189608]: 2025-11-24 22:30:45.793 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 22:30:45 compute-0 nova_compute[189608]: 2025-11-24 22:30:45.931 189613 DEBUG nova.network.neutron [req-25bf1a2d-bb95-4132-a6c4-42be55a61cbc req-b0f1ff24-4b04-4b75-b7d9-8cdffd57a999 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Updated VIF entry in instance network info cache for port e0390902-6ae3-485d-b497-f57a8cca001c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 22:30:45 compute-0 nova_compute[189608]: 2025-11-24 22:30:45.932 189613 DEBUG nova.network.neutron [req-25bf1a2d-bb95-4132-a6c4-42be55a61cbc req-b0f1ff24-4b04-4b75-b7d9-8cdffd57a999 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Updating instance_info_cache with network_info: [{"id": "e0390902-6ae3-485d-b497-f57a8cca001c", "address": "fa:16:3e:84:e8:3d", "network": {"id": "c09fe20f-09f5-4457-8c18-2dd55de423b7", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1769316562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5225a786b1b64fcbbd2af0a1b5082c92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0390902-6a", "ovs_interfaceid": "e0390902-6ae3-485d-b497-f57a8cca001c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:30:45 compute-0 nova_compute[189608]: 2025-11-24 22:30:45.957 189613 DEBUG oslo_concurrency.lockutils [req-25bf1a2d-bb95-4132-a6c4-42be55a61cbc req-b0f1ff24-4b04-4b75-b7d9-8cdffd57a999 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Releasing lock "refresh_cache-8b851edf-b3aa-4ca0-a142-8dd0d0e6270a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:30:45 compute-0 nova_compute[189608]: 2025-11-24 22:30:45.984 189613 DEBUG nova.compute.manager [req-5b0d81dd-5d00-49eb-82d4-c703992b03d8 req-88234192-ab74-4407-b869-92c4cbaa7bcb c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Received event network-vif-unplugged-e0390902-6ae3-485d-b497-f57a8cca001c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:30:45 compute-0 nova_compute[189608]: 2025-11-24 22:30:45.985 189613 DEBUG oslo_concurrency.lockutils [req-5b0d81dd-5d00-49eb-82d4-c703992b03d8 req-88234192-ab74-4407-b869-92c4cbaa7bcb c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "8b851edf-b3aa-4ca0-a142-8dd0d0e6270a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:30:45 compute-0 nova_compute[189608]: 2025-11-24 22:30:45.986 189613 DEBUG oslo_concurrency.lockutils [req-5b0d81dd-5d00-49eb-82d4-c703992b03d8 req-88234192-ab74-4407-b869-92c4cbaa7bcb c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "8b851edf-b3aa-4ca0-a142-8dd0d0e6270a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:30:45 compute-0 nova_compute[189608]: 2025-11-24 22:30:45.986 189613 DEBUG oslo_concurrency.lockutils [req-5b0d81dd-5d00-49eb-82d4-c703992b03d8 req-88234192-ab74-4407-b869-92c4cbaa7bcb c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "8b851edf-b3aa-4ca0-a142-8dd0d0e6270a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:30:45 compute-0 nova_compute[189608]: 2025-11-24 22:30:45.987 189613 DEBUG nova.compute.manager [req-5b0d81dd-5d00-49eb-82d4-c703992b03d8 req-88234192-ab74-4407-b869-92c4cbaa7bcb c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] No waiting events found dispatching network-vif-unplugged-e0390902-6ae3-485d-b497-f57a8cca001c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 22:30:45 compute-0 nova_compute[189608]: 2025-11-24 22:30:45.987 189613 DEBUG nova.compute.manager [req-5b0d81dd-5d00-49eb-82d4-c703992b03d8 req-88234192-ab74-4407-b869-92c4cbaa7bcb c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Received event network-vif-unplugged-e0390902-6ae3-485d-b497-f57a8cca001c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 24 22:30:46 compute-0 nova_compute[189608]: 2025-11-24 22:30:46.466 189613 DEBUG nova.network.neutron [-] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:30:46 compute-0 nova_compute[189608]: 2025-11-24 22:30:46.480 189613 INFO nova.compute.manager [-] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Took 1.12 seconds to deallocate network for instance.
Nov 24 22:30:46 compute-0 nova_compute[189608]: 2025-11-24 22:30:46.535 189613 DEBUG oslo_concurrency.lockutils [None req-f29a690a-59f3-4134-b082-735d6509fea7 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:30:46 compute-0 nova_compute[189608]: 2025-11-24 22:30:46.536 189613 DEBUG oslo_concurrency.lockutils [None req-f29a690a-59f3-4134-b082-735d6509fea7 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:30:46 compute-0 nova_compute[189608]: 2025-11-24 22:30:46.685 189613 DEBUG nova.compute.provider_tree [None req-f29a690a-59f3-4134-b082-735d6509fea7 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:30:46 compute-0 nova_compute[189608]: 2025-11-24 22:30:46.700 189613 DEBUG nova.scheduler.client.report [None req-f29a690a-59f3-4134-b082-735d6509fea7 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:30:46 compute-0 nova_compute[189608]: 2025-11-24 22:30:46.722 189613 DEBUG oslo_concurrency.lockutils [None req-f29a690a-59f3-4134-b082-735d6509fea7 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.186s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:30:46 compute-0 nova_compute[189608]: 2025-11-24 22:30:46.752 189613 INFO nova.scheduler.client.report [None req-f29a690a-59f3-4134-b082-735d6509fea7 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Deleted allocations for instance 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a
Nov 24 22:30:46 compute-0 nova_compute[189608]: 2025-11-24 22:30:46.820 189613 DEBUG oslo_concurrency.lockutils [None req-f29a690a-59f3-4134-b082-735d6509fea7 d2ee7a1723f8477f92f62974f0676bd8 5225a786b1b64fcbbd2af0a1b5082c92 - - default default] Lock "8b851edf-b3aa-4ca0-a142-8dd0d0e6270a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.831s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:30:46 compute-0 sshd-session[252918]: Disconnecting invalid user backup 185.217.1.246 port 22894: Change of username or service not allowed: (backup,ssh-connection) -> (intern,ssh-connection) [preauth]
Nov 24 22:30:47 compute-0 ovn_controller[97889]: 2025-11-24T22:30:47Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:40:76:1e 10.100.0.12
Nov 24 22:30:47 compute-0 ovn_controller[97889]: 2025-11-24T22:30:47Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:40:76:1e 10.100.0.12
Nov 24 22:30:48 compute-0 nova_compute[189608]: 2025-11-24 22:30:48.049 189613 DEBUG nova.compute.manager [req-6415b6bd-cf0f-4280-a241-acb666a295ac req-307e2016-e96e-4d31-a0ac-5613d60290c4 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Received event network-vif-plugged-e0390902-6ae3-485d-b497-f57a8cca001c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:30:48 compute-0 nova_compute[189608]: 2025-11-24 22:30:48.049 189613 DEBUG oslo_concurrency.lockutils [req-6415b6bd-cf0f-4280-a241-acb666a295ac req-307e2016-e96e-4d31-a0ac-5613d60290c4 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "8b851edf-b3aa-4ca0-a142-8dd0d0e6270a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:30:48 compute-0 nova_compute[189608]: 2025-11-24 22:30:48.051 189613 DEBUG oslo_concurrency.lockutils [req-6415b6bd-cf0f-4280-a241-acb666a295ac req-307e2016-e96e-4d31-a0ac-5613d60290c4 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "8b851edf-b3aa-4ca0-a142-8dd0d0e6270a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:30:48 compute-0 nova_compute[189608]: 2025-11-24 22:30:48.052 189613 DEBUG oslo_concurrency.lockutils [req-6415b6bd-cf0f-4280-a241-acb666a295ac req-307e2016-e96e-4d31-a0ac-5613d60290c4 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "8b851edf-b3aa-4ca0-a142-8dd0d0e6270a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:30:48 compute-0 nova_compute[189608]: 2025-11-24 22:30:48.052 189613 DEBUG nova.compute.manager [req-6415b6bd-cf0f-4280-a241-acb666a295ac req-307e2016-e96e-4d31-a0ac-5613d60290c4 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] No waiting events found dispatching network-vif-plugged-e0390902-6ae3-485d-b497-f57a8cca001c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 22:30:48 compute-0 nova_compute[189608]: 2025-11-24 22:30:48.054 189613 WARNING nova.compute.manager [req-6415b6bd-cf0f-4280-a241-acb666a295ac req-307e2016-e96e-4d31-a0ac-5613d60290c4 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Received unexpected event network-vif-plugged-e0390902-6ae3-485d-b497-f57a8cca001c for instance with vm_state deleted and task_state None.
Nov 24 22:30:48 compute-0 nova_compute[189608]: 2025-11-24 22:30:48.054 189613 DEBUG nova.compute.manager [req-6415b6bd-cf0f-4280-a241-acb666a295ac req-307e2016-e96e-4d31-a0ac-5613d60290c4 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Received event network-vif-deleted-e0390902-6ae3-485d-b497-f57a8cca001c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:30:49 compute-0 nova_compute[189608]: 2025-11-24 22:30:49.049 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:49 compute-0 nova_compute[189608]: 2025-11-24 22:30:49.140 189613 DEBUG oslo_concurrency.lockutils [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Acquiring lock "7e9cad45-0047-443a-9aae-51409c77ea0e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:30:49 compute-0 nova_compute[189608]: 2025-11-24 22:30:49.140 189613 DEBUG oslo_concurrency.lockutils [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Lock "7e9cad45-0047-443a-9aae-51409c77ea0e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:30:49 compute-0 nova_compute[189608]: 2025-11-24 22:30:49.161 189613 DEBUG nova.compute.manager [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 24 22:30:49 compute-0 nova_compute[189608]: 2025-11-24 22:30:49.246 189613 DEBUG oslo_concurrency.lockutils [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:30:49 compute-0 nova_compute[189608]: 2025-11-24 22:30:49.247 189613 DEBUG oslo_concurrency.lockutils [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:30:49 compute-0 nova_compute[189608]: 2025-11-24 22:30:49.259 189613 DEBUG nova.virt.hardware [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 24 22:30:49 compute-0 nova_compute[189608]: 2025-11-24 22:30:49.260 189613 INFO nova.compute.claims [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Claim successful on node compute-0.ctlplane.example.com
Nov 24 22:30:49 compute-0 nova_compute[189608]: 2025-11-24 22:30:49.445 189613 DEBUG nova.compute.provider_tree [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:30:49 compute-0 nova_compute[189608]: 2025-11-24 22:30:49.458 189613 DEBUG nova.scheduler.client.report [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:30:49 compute-0 nova_compute[189608]: 2025-11-24 22:30:49.475 189613 DEBUG oslo_concurrency.lockutils [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.228s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:30:49 compute-0 nova_compute[189608]: 2025-11-24 22:30:49.476 189613 DEBUG nova.compute.manager [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 24 22:30:49 compute-0 nova_compute[189608]: 2025-11-24 22:30:49.521 189613 DEBUG nova.compute.manager [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 24 22:30:49 compute-0 nova_compute[189608]: 2025-11-24 22:30:49.521 189613 DEBUG nova.network.neutron [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 24 22:30:49 compute-0 nova_compute[189608]: 2025-11-24 22:30:49.550 189613 INFO nova.virt.libvirt.driver [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 24 22:30:49 compute-0 nova_compute[189608]: 2025-11-24 22:30:49.567 189613 DEBUG nova.compute.manager [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 24 22:30:49 compute-0 nova_compute[189608]: 2025-11-24 22:30:49.660 189613 DEBUG nova.compute.manager [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 24 22:30:49 compute-0 nova_compute[189608]: 2025-11-24 22:30:49.662 189613 DEBUG nova.virt.libvirt.driver [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 24 22:30:49 compute-0 nova_compute[189608]: 2025-11-24 22:30:49.663 189613 INFO nova.virt.libvirt.driver [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Creating image(s)
Nov 24 22:30:49 compute-0 nova_compute[189608]: 2025-11-24 22:30:49.663 189613 DEBUG oslo_concurrency.lockutils [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Acquiring lock "/var/lib/nova/instances/7e9cad45-0047-443a-9aae-51409c77ea0e/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:30:49 compute-0 nova_compute[189608]: 2025-11-24 22:30:49.664 189613 DEBUG oslo_concurrency.lockutils [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Lock "/var/lib/nova/instances/7e9cad45-0047-443a-9aae-51409c77ea0e/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:30:49 compute-0 nova_compute[189608]: 2025-11-24 22:30:49.665 189613 DEBUG oslo_concurrency.lockutils [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Lock "/var/lib/nova/instances/7e9cad45-0047-443a-9aae-51409c77ea0e/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:30:49 compute-0 nova_compute[189608]: 2025-11-24 22:30:49.686 189613 DEBUG oslo_concurrency.processutils [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:30:49 compute-0 nova_compute[189608]: 2025-11-24 22:30:49.759 189613 DEBUG oslo_concurrency.processutils [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:30:49 compute-0 nova_compute[189608]: 2025-11-24 22:30:49.760 189613 DEBUG oslo_concurrency.lockutils [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Acquiring lock "a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:30:49 compute-0 nova_compute[189608]: 2025-11-24 22:30:49.761 189613 DEBUG oslo_concurrency.lockutils [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Lock "a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:30:49 compute-0 nova_compute[189608]: 2025-11-24 22:30:49.784 189613 DEBUG oslo_concurrency.processutils [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:30:49 compute-0 nova_compute[189608]: 2025-11-24 22:30:49.810 189613 DEBUG nova.policy [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3c237a245573467f9ef112e37f828fa4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ef25c1f070fc464ea634b7f669bcc935', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 24 22:30:49 compute-0 nova_compute[189608]: 2025-11-24 22:30:49.852 189613 DEBUG oslo_concurrency.processutils [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:30:49 compute-0 nova_compute[189608]: 2025-11-24 22:30:49.853 189613 DEBUG oslo_concurrency.processutils [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e,backing_fmt=raw /var/lib/nova/instances/7e9cad45-0047-443a-9aae-51409c77ea0e/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:30:49 compute-0 nova_compute[189608]: 2025-11-24 22:30:49.902 189613 DEBUG oslo_concurrency.processutils [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e,backing_fmt=raw /var/lib/nova/instances/7e9cad45-0047-443a-9aae-51409c77ea0e/disk 1073741824" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:30:49 compute-0 nova_compute[189608]: 2025-11-24 22:30:49.903 189613 DEBUG oslo_concurrency.lockutils [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Lock "a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.142s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:30:49 compute-0 nova_compute[189608]: 2025-11-24 22:30:49.903 189613 DEBUG oslo_concurrency.processutils [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:30:49 compute-0 nova_compute[189608]: 2025-11-24 22:30:49.966 189613 DEBUG oslo_concurrency.processutils [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:30:49 compute-0 nova_compute[189608]: 2025-11-24 22:30:49.967 189613 DEBUG nova.virt.disk.api [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Checking if we can resize image /var/lib/nova/instances/7e9cad45-0047-443a-9aae-51409c77ea0e/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 24 22:30:49 compute-0 nova_compute[189608]: 2025-11-24 22:30:49.968 189613 DEBUG oslo_concurrency.processutils [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e9cad45-0047-443a-9aae-51409c77ea0e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:30:50 compute-0 nova_compute[189608]: 2025-11-24 22:30:50.031 189613 DEBUG oslo_concurrency.processutils [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7e9cad45-0047-443a-9aae-51409c77ea0e/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:30:50 compute-0 nova_compute[189608]: 2025-11-24 22:30:50.032 189613 DEBUG nova.virt.disk.api [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Cannot resize image /var/lib/nova/instances/7e9cad45-0047-443a-9aae-51409c77ea0e/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Nov 24 22:30:50 compute-0 nova_compute[189608]: 2025-11-24 22:30:50.032 189613 DEBUG nova.objects.instance [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Lazy-loading 'migration_context' on Instance uuid 7e9cad45-0047-443a-9aae-51409c77ea0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:30:50 compute-0 nova_compute[189608]: 2025-11-24 22:30:50.046 189613 DEBUG nova.virt.libvirt.driver [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 24 22:30:50 compute-0 nova_compute[189608]: 2025-11-24 22:30:50.046 189613 DEBUG nova.virt.libvirt.driver [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Ensure instance console log exists: /var/lib/nova/instances/7e9cad45-0047-443a-9aae-51409c77ea0e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 24 22:30:50 compute-0 nova_compute[189608]: 2025-11-24 22:30:50.047 189613 DEBUG oslo_concurrency.lockutils [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:30:50 compute-0 nova_compute[189608]: 2025-11-24 22:30:50.047 189613 DEBUG oslo_concurrency.lockutils [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:30:50 compute-0 nova_compute[189608]: 2025-11-24 22:30:50.047 189613 DEBUG oslo_concurrency.lockutils [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:30:50 compute-0 nova_compute[189608]: 2025-11-24 22:30:50.292 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:50 compute-0 nova_compute[189608]: 2025-11-24 22:30:50.508 189613 DEBUG nova.network.neutron [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Successfully created port: 05ca4ec0-d6b5-444a-a18c-1dbbc28c6267 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 24 22:30:51 compute-0 ovn_controller[97889]: 2025-11-24T22:30:51Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d2:89:1f 10.100.0.11
Nov 24 22:30:51 compute-0 ovn_controller[97889]: 2025-11-24T22:30:51Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d2:89:1f 10.100.0.11
Nov 24 22:30:51 compute-0 podman[253061]: 2025-11-24 22:30:51.557119357 +0000 UTC m=+0.106639144 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:30:51 compute-0 podman[253060]: 2025-11-24 22:30:51.560371928 +0000 UTC m=+0.113043593 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, managed_by=edpm_ansible, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.openshift.tags=minimal rhel9, version=9.6, container_name=openstack_network_exporter)
Nov 24 22:30:51 compute-0 podman[253059]: 2025-11-24 22:30:51.561860804 +0000 UTC m=+0.105056775 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, config_id=edpm, maintainer=Red Hat, Inc., release=1214.1726694543, vendor=Red Hat, Inc., io.openshift.expose-services=, version=9.4, name=ubi9, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, release-0.7.12=, architecture=x86_64, managed_by=edpm_ansible, vcs-type=git, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9)
Nov 24 22:30:51 compute-0 nova_compute[189608]: 2025-11-24 22:30:51.631 189613 DEBUG nova.network.neutron [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Successfully updated port: 05ca4ec0-d6b5-444a-a18c-1dbbc28c6267 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 24 22:30:51 compute-0 nova_compute[189608]: 2025-11-24 22:30:51.649 189613 DEBUG oslo_concurrency.lockutils [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Acquiring lock "refresh_cache-7e9cad45-0047-443a-9aae-51409c77ea0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:30:51 compute-0 nova_compute[189608]: 2025-11-24 22:30:51.649 189613 DEBUG oslo_concurrency.lockutils [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Acquired lock "refresh_cache-7e9cad45-0047-443a-9aae-51409c77ea0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:30:51 compute-0 nova_compute[189608]: 2025-11-24 22:30:51.650 189613 DEBUG nova.network.neutron [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 22:30:51 compute-0 nova_compute[189608]: 2025-11-24 22:30:51.776 189613 DEBUG nova.compute.manager [req-bca33bd7-b23c-46f8-bb74-d552dae16145 req-a71564ea-e8a9-47d2-89da-9dcf33c77ccd c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Received event network-changed-05ca4ec0-d6b5-444a-a18c-1dbbc28c6267 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:30:51 compute-0 nova_compute[189608]: 2025-11-24 22:30:51.777 189613 DEBUG nova.compute.manager [req-bca33bd7-b23c-46f8-bb74-d552dae16145 req-a71564ea-e8a9-47d2-89da-9dcf33c77ccd c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Refreshing instance network info cache due to event network-changed-05ca4ec0-d6b5-444a-a18c-1dbbc28c6267. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 22:30:51 compute-0 nova_compute[189608]: 2025-11-24 22:30:51.777 189613 DEBUG oslo_concurrency.lockutils [req-bca33bd7-b23c-46f8-bb74-d552dae16145 req-a71564ea-e8a9-47d2-89da-9dcf33c77ccd c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "refresh_cache-7e9cad45-0047-443a-9aae-51409c77ea0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:30:51 compute-0 nova_compute[189608]: 2025-11-24 22:30:51.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:30:51 compute-0 nova_compute[189608]: 2025-11-24 22:30:51.793 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 24 22:30:51 compute-0 nova_compute[189608]: 2025-11-24 22:30:51.811 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 24 22:30:51 compute-0 nova_compute[189608]: 2025-11-24 22:30:51.812 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:30:51 compute-0 nova_compute[189608]: 2025-11-24 22:30:51.876 189613 DEBUG nova.network.neutron [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 22:30:52 compute-0 ovn_controller[97889]: 2025-11-24T22:30:52Z|00131|binding|INFO|Releasing lport ce3870c0-48db-470b-8d5d-479134c9b554 from this chassis (sb_readonly=0)
Nov 24 22:30:52 compute-0 ovn_controller[97889]: 2025-11-24T22:30:52Z|00132|binding|INFO|Releasing lport 5660e6d6-677f-4bf6-8ebf-40ac9c648155 from this chassis (sb_readonly=0)
Nov 24 22:30:52 compute-0 ovn_controller[97889]: 2025-11-24T22:30:52Z|00133|binding|INFO|Releasing lport 7dcd4ddb-3860-49b9-87ed-1daf692defef from this chassis (sb_readonly=0)
Nov 24 22:30:52 compute-0 nova_compute[189608]: 2025-11-24 22:30:52.704 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:52 compute-0 nova_compute[189608]: 2025-11-24 22:30:52.832 189613 DEBUG nova.network.neutron [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Updating instance_info_cache with network_info: [{"id": "05ca4ec0-d6b5-444a-a18c-1dbbc28c6267", "address": "fa:16:3e:29:40:00", "network": {"id": "d197f7a3-9f5f-489e-ac56-bdf9c1500396", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1144015615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef25c1f070fc464ea634b7f669bcc935", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05ca4ec0-d6", "ovs_interfaceid": "05ca4ec0-d6b5-444a-a18c-1dbbc28c6267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:30:52 compute-0 nova_compute[189608]: 2025-11-24 22:30:52.858 189613 DEBUG oslo_concurrency.lockutils [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Releasing lock "refresh_cache-7e9cad45-0047-443a-9aae-51409c77ea0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:30:52 compute-0 nova_compute[189608]: 2025-11-24 22:30:52.858 189613 DEBUG nova.compute.manager [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Instance network_info: |[{"id": "05ca4ec0-d6b5-444a-a18c-1dbbc28c6267", "address": "fa:16:3e:29:40:00", "network": {"id": "d197f7a3-9f5f-489e-ac56-bdf9c1500396", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1144015615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef25c1f070fc464ea634b7f669bcc935", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05ca4ec0-d6", "ovs_interfaceid": "05ca4ec0-d6b5-444a-a18c-1dbbc28c6267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 24 22:30:52 compute-0 nova_compute[189608]: 2025-11-24 22:30:52.859 189613 DEBUG oslo_concurrency.lockutils [req-bca33bd7-b23c-46f8-bb74-d552dae16145 req-a71564ea-e8a9-47d2-89da-9dcf33c77ccd c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquired lock "refresh_cache-7e9cad45-0047-443a-9aae-51409c77ea0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:30:52 compute-0 nova_compute[189608]: 2025-11-24 22:30:52.859 189613 DEBUG nova.network.neutron [req-bca33bd7-b23c-46f8-bb74-d552dae16145 req-a71564ea-e8a9-47d2-89da-9dcf33c77ccd c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Refreshing network info cache for port 05ca4ec0-d6b5-444a-a18c-1dbbc28c6267 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 22:30:52 compute-0 nova_compute[189608]: 2025-11-24 22:30:52.865 189613 DEBUG nova.virt.libvirt.driver [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Start _get_guest_xml network_info=[{"id": "05ca4ec0-d6b5-444a-a18c-1dbbc28c6267", "address": "fa:16:3e:29:40:00", "network": {"id": "d197f7a3-9f5f-489e-ac56-bdf9c1500396", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1144015615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef25c1f070fc464ea634b7f669bcc935", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05ca4ec0-d6", "ovs_interfaceid": "05ca4ec0-d6b5-444a-a18c-1dbbc28c6267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T22:28:16Z,direct_url=<?>,disk_format='qcow2',id=ec71d7d5-c197-4331-bf8d-e2de71a8419f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='309342b7e3e849b2a5dd56651d8fa068',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T22:28:18Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'size': 0, 'encryption_format': None, 'guest_format': None, 'image_id': 'ec71d7d5-c197-4331-bf8d-e2de71a8419f'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 24 22:30:52 compute-0 nova_compute[189608]: 2025-11-24 22:30:52.877 189613 WARNING nova.virt.libvirt.driver [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:30:52 compute-0 nova_compute[189608]: 2025-11-24 22:30:52.895 189613 DEBUG nova.virt.libvirt.host [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 24 22:30:52 compute-0 nova_compute[189608]: 2025-11-24 22:30:52.896 189613 DEBUG nova.virt.libvirt.host [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 24 22:30:52 compute-0 nova_compute[189608]: 2025-11-24 22:30:52.905 189613 DEBUG nova.virt.libvirt.host [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 24 22:30:52 compute-0 nova_compute[189608]: 2025-11-24 22:30:52.906 189613 DEBUG nova.virt.libvirt.host [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 24 22:30:52 compute-0 nova_compute[189608]: 2025-11-24 22:30:52.907 189613 DEBUG nova.virt.libvirt.driver [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 24 22:30:52 compute-0 nova_compute[189608]: 2025-11-24 22:30:52.908 189613 DEBUG nova.virt.hardware [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-24T22:28:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a49f1e6c-1051-4dea-812e-0063121444a0',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T22:28:16Z,direct_url=<?>,disk_format='qcow2',id=ec71d7d5-c197-4331-bf8d-e2de71a8419f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='309342b7e3e849b2a5dd56651d8fa068',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T22:28:18Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 24 22:30:52 compute-0 nova_compute[189608]: 2025-11-24 22:30:52.909 189613 DEBUG nova.virt.hardware [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 24 22:30:52 compute-0 nova_compute[189608]: 2025-11-24 22:30:52.909 189613 DEBUG nova.virt.hardware [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 24 22:30:52 compute-0 nova_compute[189608]: 2025-11-24 22:30:52.910 189613 DEBUG nova.virt.hardware [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 24 22:30:52 compute-0 nova_compute[189608]: 2025-11-24 22:30:52.910 189613 DEBUG nova.virt.hardware [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 24 22:30:52 compute-0 nova_compute[189608]: 2025-11-24 22:30:52.911 189613 DEBUG nova.virt.hardware [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 24 22:30:52 compute-0 nova_compute[189608]: 2025-11-24 22:30:52.911 189613 DEBUG nova.virt.hardware [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 24 22:30:52 compute-0 nova_compute[189608]: 2025-11-24 22:30:52.912 189613 DEBUG nova.virt.hardware [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 24 22:30:52 compute-0 nova_compute[189608]: 2025-11-24 22:30:52.912 189613 DEBUG nova.virt.hardware [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 24 22:30:52 compute-0 nova_compute[189608]: 2025-11-24 22:30:52.913 189613 DEBUG nova.virt.hardware [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 24 22:30:52 compute-0 nova_compute[189608]: 2025-11-24 22:30:52.913 189613 DEBUG nova.virt.hardware [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 24 22:30:52 compute-0 nova_compute[189608]: 2025-11-24 22:30:52.921 189613 DEBUG nova.virt.libvirt.vif [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T22:30:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-754153210',display_name='tempest-ServerAddressesTestJSON-server-754153210',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-754153210',id=12,image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ef25c1f070fc464ea634b7f669bcc935',ramdisk_id='',reservation_id='r-17hmm6kn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1640119929',owner_user_name='tempest-ServerAddressesTestJSON-1640119929-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T22:30:49Z,user_data=None,user_id='3c237a245573467f9ef112e37f828fa4',uuid=7e9cad45-0047-443a-9aae-51409c77ea0e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05ca4ec0-d6b5-444a-a18c-1dbbc28c6267", "address": "fa:16:3e:29:40:00", "network": {"id": "d197f7a3-9f5f-489e-ac56-bdf9c1500396", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1144015615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef25c1f070fc464ea634b7f669bcc935", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05ca4ec0-d6", "ovs_interfaceid": "05ca4ec0-d6b5-444a-a18c-1dbbc28c6267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 24 22:30:52 compute-0 nova_compute[189608]: 2025-11-24 22:30:52.922 189613 DEBUG nova.network.os_vif_util [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Converting VIF {"id": "05ca4ec0-d6b5-444a-a18c-1dbbc28c6267", "address": "fa:16:3e:29:40:00", "network": {"id": "d197f7a3-9f5f-489e-ac56-bdf9c1500396", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1144015615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef25c1f070fc464ea634b7f669bcc935", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05ca4ec0-d6", "ovs_interfaceid": "05ca4ec0-d6b5-444a-a18c-1dbbc28c6267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 22:30:52 compute-0 nova_compute[189608]: 2025-11-24 22:30:52.923 189613 DEBUG nova.network.os_vif_util [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:29:40:00,bridge_name='br-int',has_traffic_filtering=True,id=05ca4ec0-d6b5-444a-a18c-1dbbc28c6267,network=Network(d197f7a3-9f5f-489e-ac56-bdf9c1500396),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05ca4ec0-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 22:30:52 compute-0 nova_compute[189608]: 2025-11-24 22:30:52.925 189613 DEBUG nova.objects.instance [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7e9cad45-0047-443a-9aae-51409c77ea0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:30:52 compute-0 nova_compute[189608]: 2025-11-24 22:30:52.945 189613 DEBUG nova.virt.libvirt.driver [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] End _get_guest_xml xml=<domain type="kvm">
Nov 24 22:30:52 compute-0 nova_compute[189608]:   <uuid>7e9cad45-0047-443a-9aae-51409c77ea0e</uuid>
Nov 24 22:30:52 compute-0 nova_compute[189608]:   <name>instance-0000000c</name>
Nov 24 22:30:52 compute-0 nova_compute[189608]:   <memory>131072</memory>
Nov 24 22:30:52 compute-0 nova_compute[189608]:   <vcpu>1</vcpu>
Nov 24 22:30:52 compute-0 nova_compute[189608]:   <metadata>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 22:30:52 compute-0 nova_compute[189608]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:       <nova:name>tempest-ServerAddressesTestJSON-server-754153210</nova:name>
Nov 24 22:30:52 compute-0 nova_compute[189608]:       <nova:creationTime>2025-11-24 22:30:52</nova:creationTime>
Nov 24 22:30:52 compute-0 nova_compute[189608]:       <nova:flavor name="m1.nano">
Nov 24 22:30:52 compute-0 nova_compute[189608]:         <nova:memory>128</nova:memory>
Nov 24 22:30:52 compute-0 nova_compute[189608]:         <nova:disk>1</nova:disk>
Nov 24 22:30:52 compute-0 nova_compute[189608]:         <nova:swap>0</nova:swap>
Nov 24 22:30:52 compute-0 nova_compute[189608]:         <nova:ephemeral>0</nova:ephemeral>
Nov 24 22:30:52 compute-0 nova_compute[189608]:         <nova:vcpus>1</nova:vcpus>
Nov 24 22:30:52 compute-0 nova_compute[189608]:       </nova:flavor>
Nov 24 22:30:52 compute-0 nova_compute[189608]:       <nova:owner>
Nov 24 22:30:52 compute-0 nova_compute[189608]:         <nova:user uuid="3c237a245573467f9ef112e37f828fa4">tempest-ServerAddressesTestJSON-1640119929-project-member</nova:user>
Nov 24 22:30:52 compute-0 nova_compute[189608]:         <nova:project uuid="ef25c1f070fc464ea634b7f669bcc935">tempest-ServerAddressesTestJSON-1640119929</nova:project>
Nov 24 22:30:52 compute-0 nova_compute[189608]:       </nova:owner>
Nov 24 22:30:52 compute-0 nova_compute[189608]:       <nova:root type="image" uuid="ec71d7d5-c197-4331-bf8d-e2de71a8419f"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:       <nova:ports>
Nov 24 22:30:52 compute-0 nova_compute[189608]:         <nova:port uuid="05ca4ec0-d6b5-444a-a18c-1dbbc28c6267">
Nov 24 22:30:52 compute-0 nova_compute[189608]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:         </nova:port>
Nov 24 22:30:52 compute-0 nova_compute[189608]:       </nova:ports>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     </nova:instance>
Nov 24 22:30:52 compute-0 nova_compute[189608]:   </metadata>
Nov 24 22:30:52 compute-0 nova_compute[189608]:   <sysinfo type="smbios">
Nov 24 22:30:52 compute-0 nova_compute[189608]:     <system>
Nov 24 22:30:52 compute-0 nova_compute[189608]:       <entry name="manufacturer">RDO</entry>
Nov 24 22:30:52 compute-0 nova_compute[189608]:       <entry name="product">OpenStack Compute</entry>
Nov 24 22:30:52 compute-0 nova_compute[189608]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 22:30:52 compute-0 nova_compute[189608]:       <entry name="serial">7e9cad45-0047-443a-9aae-51409c77ea0e</entry>
Nov 24 22:30:52 compute-0 nova_compute[189608]:       <entry name="uuid">7e9cad45-0047-443a-9aae-51409c77ea0e</entry>
Nov 24 22:30:52 compute-0 nova_compute[189608]:       <entry name="family">Virtual Machine</entry>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     </system>
Nov 24 22:30:52 compute-0 nova_compute[189608]:   </sysinfo>
Nov 24 22:30:52 compute-0 nova_compute[189608]:   <os>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     <boot dev="hd"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     <smbios mode="sysinfo"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:   </os>
Nov 24 22:30:52 compute-0 nova_compute[189608]:   <features>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     <acpi/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     <apic/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     <vmcoreinfo/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:   </features>
Nov 24 22:30:52 compute-0 nova_compute[189608]:   <clock offset="utc">
Nov 24 22:30:52 compute-0 nova_compute[189608]:     <timer name="pit" tickpolicy="delay"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     <timer name="hpet" present="no"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:   </clock>
Nov 24 22:30:52 compute-0 nova_compute[189608]:   <cpu mode="host-model" match="exact">
Nov 24 22:30:52 compute-0 nova_compute[189608]:     <topology sockets="1" cores="1" threads="1"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:   </cpu>
Nov 24 22:30:52 compute-0 nova_compute[189608]:   <devices>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     <disk type="file" device="disk">
Nov 24 22:30:52 compute-0 nova_compute[189608]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:       <source file="/var/lib/nova/instances/7e9cad45-0047-443a-9aae-51409c77ea0e/disk"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:       <target dev="vda" bus="virtio"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     </disk>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     <disk type="file" device="cdrom">
Nov 24 22:30:52 compute-0 nova_compute[189608]:       <driver name="qemu" type="raw" cache="none"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:       <source file="/var/lib/nova/instances/7e9cad45-0047-443a-9aae-51409c77ea0e/disk.config"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:       <target dev="sda" bus="sata"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     </disk>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     <interface type="ethernet">
Nov 24 22:30:52 compute-0 nova_compute[189608]:       <mac address="fa:16:3e:29:40:00"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:       <model type="virtio"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:       <driver name="vhost" rx_queue_size="512"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:       <mtu size="1442"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:       <target dev="tap05ca4ec0-d6"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     </interface>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     <serial type="pty">
Nov 24 22:30:52 compute-0 nova_compute[189608]:       <log file="/var/lib/nova/instances/7e9cad45-0047-443a-9aae-51409c77ea0e/console.log" append="off"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     </serial>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     <video>
Nov 24 22:30:52 compute-0 nova_compute[189608]:       <model type="virtio"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     </video>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     <input type="tablet" bus="usb"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     <rng model="virtio">
Nov 24 22:30:52 compute-0 nova_compute[189608]:       <backend model="random">/dev/urandom</backend>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     </rng>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     <controller type="usb" index="0"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     <memballoon model="virtio">
Nov 24 22:30:52 compute-0 nova_compute[189608]:       <stats period="10"/>
Nov 24 22:30:52 compute-0 nova_compute[189608]:     </memballoon>
Nov 24 22:30:52 compute-0 nova_compute[189608]:   </devices>
Nov 24 22:30:52 compute-0 nova_compute[189608]: </domain>
Nov 24 22:30:52 compute-0 nova_compute[189608]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 24 22:30:52 compute-0 nova_compute[189608]: 2025-11-24 22:30:52.946 189613 DEBUG nova.compute.manager [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Preparing to wait for external event network-vif-plugged-05ca4ec0-d6b5-444a-a18c-1dbbc28c6267 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 24 22:30:52 compute-0 nova_compute[189608]: 2025-11-24 22:30:52.947 189613 DEBUG oslo_concurrency.lockutils [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Acquiring lock "7e9cad45-0047-443a-9aae-51409c77ea0e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:30:52 compute-0 nova_compute[189608]: 2025-11-24 22:30:52.947 189613 DEBUG oslo_concurrency.lockutils [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Lock "7e9cad45-0047-443a-9aae-51409c77ea0e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:30:52 compute-0 nova_compute[189608]: 2025-11-24 22:30:52.947 189613 DEBUG oslo_concurrency.lockutils [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Lock "7e9cad45-0047-443a-9aae-51409c77ea0e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:30:52 compute-0 nova_compute[189608]: 2025-11-24 22:30:52.948 189613 DEBUG nova.virt.libvirt.vif [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T22:30:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-754153210',display_name='tempest-ServerAddressesTestJSON-server-754153210',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-754153210',id=12,image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ef25c1f070fc464ea634b7f669bcc935',ramdisk_id='',reservation_id='r-17hmm6kn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1640119929',owner_user_name='tempest-ServerAddressesTestJSON-1640119929-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T22:30:49Z,user_data=None,user_id='3c237a245573467f9ef112e37f828fa4',uuid=7e9cad45-0047-443a-9aae-51409c77ea0e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05ca4ec0-d6b5-444a-a18c-1dbbc28c6267", "address": "fa:16:3e:29:40:00", "network": {"id": "d197f7a3-9f5f-489e-ac56-bdf9c1500396", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1144015615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef25c1f070fc464ea634b7f669bcc935", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05ca4ec0-d6", "ovs_interfaceid": "05ca4ec0-d6b5-444a-a18c-1dbbc28c6267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 24 22:30:52 compute-0 nova_compute[189608]: 2025-11-24 22:30:52.949 189613 DEBUG nova.network.os_vif_util [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Converting VIF {"id": "05ca4ec0-d6b5-444a-a18c-1dbbc28c6267", "address": "fa:16:3e:29:40:00", "network": {"id": "d197f7a3-9f5f-489e-ac56-bdf9c1500396", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1144015615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef25c1f070fc464ea634b7f669bcc935", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05ca4ec0-d6", "ovs_interfaceid": "05ca4ec0-d6b5-444a-a18c-1dbbc28c6267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 22:30:52 compute-0 nova_compute[189608]: 2025-11-24 22:30:52.950 189613 DEBUG nova.network.os_vif_util [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:29:40:00,bridge_name='br-int',has_traffic_filtering=True,id=05ca4ec0-d6b5-444a-a18c-1dbbc28c6267,network=Network(d197f7a3-9f5f-489e-ac56-bdf9c1500396),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05ca4ec0-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 22:30:52 compute-0 nova_compute[189608]: 2025-11-24 22:30:52.950 189613 DEBUG os_vif [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:40:00,bridge_name='br-int',has_traffic_filtering=True,id=05ca4ec0-d6b5-444a-a18c-1dbbc28c6267,network=Network(d197f7a3-9f5f-489e-ac56-bdf9c1500396),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05ca4ec0-d6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 24 22:30:52 compute-0 nova_compute[189608]: 2025-11-24 22:30:52.951 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:52 compute-0 nova_compute[189608]: 2025-11-24 22:30:52.952 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:30:52 compute-0 nova_compute[189608]: 2025-11-24 22:30:52.952 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 22:30:52 compute-0 nova_compute[189608]: 2025-11-24 22:30:52.956 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:52 compute-0 nova_compute[189608]: 2025-11-24 22:30:52.957 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap05ca4ec0-d6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:30:52 compute-0 nova_compute[189608]: 2025-11-24 22:30:52.957 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap05ca4ec0-d6, col_values=(('external_ids', {'iface-id': '05ca4ec0-d6b5-444a-a18c-1dbbc28c6267', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:29:40:00', 'vm-uuid': '7e9cad45-0047-443a-9aae-51409c77ea0e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:30:52 compute-0 nova_compute[189608]: 2025-11-24 22:30:52.960 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:52 compute-0 NetworkManager[56413]: <info>  [1764023452.9634] manager: (tap05ca4ec0-d6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Nov 24 22:30:52 compute-0 nova_compute[189608]: 2025-11-24 22:30:52.966 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 22:30:52 compute-0 nova_compute[189608]: 2025-11-24 22:30:52.972 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:52 compute-0 nova_compute[189608]: 2025-11-24 22:30:52.973 189613 INFO os_vif [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:40:00,bridge_name='br-int',has_traffic_filtering=True,id=05ca4ec0-d6b5-444a-a18c-1dbbc28c6267,network=Network(d197f7a3-9f5f-489e-ac56-bdf9c1500396),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05ca4ec0-d6')
Nov 24 22:30:53 compute-0 nova_compute[189608]: 2025-11-24 22:30:53.049 189613 DEBUG nova.virt.libvirt.driver [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 22:30:53 compute-0 nova_compute[189608]: 2025-11-24 22:30:53.050 189613 DEBUG nova.virt.libvirt.driver [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 22:30:53 compute-0 nova_compute[189608]: 2025-11-24 22:30:53.050 189613 DEBUG nova.virt.libvirt.driver [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] No VIF found with MAC fa:16:3e:29:40:00, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 24 22:30:53 compute-0 nova_compute[189608]: 2025-11-24 22:30:53.051 189613 INFO nova.virt.libvirt.driver [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Using config drive
Nov 24 22:30:54 compute-0 nova_compute[189608]: 2025-11-24 22:30:54.008 189613 INFO nova.virt.libvirt.driver [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Creating config drive at /var/lib/nova/instances/7e9cad45-0047-443a-9aae-51409c77ea0e/disk.config
Nov 24 22:30:54 compute-0 nova_compute[189608]: 2025-11-24 22:30:54.021 189613 DEBUG oslo_concurrency.processutils [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7e9cad45-0047-443a-9aae-51409c77ea0e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5x3gw1db execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:30:54 compute-0 sshd-session[253116]: Invalid user solana from 193.32.162.145 port 44808
Nov 24 22:30:54 compute-0 nova_compute[189608]: 2025-11-24 22:30:54.071 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:54 compute-0 sshd-session[253116]: Connection closed by invalid user solana 193.32.162.145 port 44808 [preauth]
Nov 24 22:30:54 compute-0 nova_compute[189608]: 2025-11-24 22:30:54.185 189613 DEBUG oslo_concurrency.processutils [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7e9cad45-0047-443a-9aae-51409c77ea0e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5x3gw1db" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:30:54 compute-0 sshd-session[253057]: Invalid user intern from 185.217.1.246 port 56544
Nov 24 22:30:54 compute-0 kernel: tap05ca4ec0-d6: entered promiscuous mode
Nov 24 22:30:54 compute-0 NetworkManager[56413]: <info>  [1764023454.2978] manager: (tap05ca4ec0-d6): new Tun device (/org/freedesktop/NetworkManager/Devices/60)
Nov 24 22:30:54 compute-0 ovn_controller[97889]: 2025-11-24T22:30:54Z|00134|binding|INFO|Claiming lport 05ca4ec0-d6b5-444a-a18c-1dbbc28c6267 for this chassis.
Nov 24 22:30:54 compute-0 ovn_controller[97889]: 2025-11-24T22:30:54Z|00135|binding|INFO|05ca4ec0-d6b5-444a-a18c-1dbbc28c6267: Claiming fa:16:3e:29:40:00 10.100.0.13
Nov 24 22:30:54 compute-0 nova_compute[189608]: 2025-11-24 22:30:54.305 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:54.315 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:29:40:00 10.100.0.13'], port_security=['fa:16:3e:29:40:00 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '7e9cad45-0047-443a-9aae-51409c77ea0e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d197f7a3-9f5f-489e-ac56-bdf9c1500396', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ef25c1f070fc464ea634b7f669bcc935', 'neutron:revision_number': '2', 'neutron:security_group_ids': '04110bce-6796-49b7-9743-dca438c40b5a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a9abadd9-5d64-4429-b2a8-5e5d27f6d565, chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], logical_port=05ca4ec0-d6b5-444a-a18c-1dbbc28c6267) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:54.318 106776 INFO neutron.agent.ovn.metadata.agent [-] Port 05ca4ec0-d6b5-444a-a18c-1dbbc28c6267 in datapath d197f7a3-9f5f-489e-ac56-bdf9c1500396 bound to our chassis
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:54.323 106776 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d197f7a3-9f5f-489e-ac56-bdf9c1500396
Nov 24 22:30:54 compute-0 ovn_controller[97889]: 2025-11-24T22:30:54Z|00136|binding|INFO|Setting lport 05ca4ec0-d6b5-444a-a18c-1dbbc28c6267 ovn-installed in OVS
Nov 24 22:30:54 compute-0 ovn_controller[97889]: 2025-11-24T22:30:54Z|00137|binding|INFO|Setting lport 05ca4ec0-d6b5-444a-a18c-1dbbc28c6267 up in Southbound
Nov 24 22:30:54 compute-0 nova_compute[189608]: 2025-11-24 22:30:54.339 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:54.339 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[e67e042a-ba2d-4ebb-9b7d-39277c2e842b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:54.340 106776 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd197f7a3-91 in ovnmeta-d197f7a3-9f5f-489e-ac56-bdf9c1500396 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 24 22:30:54 compute-0 systemd-udevd[253134]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:54.343 240020 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd197f7a3-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:54.343 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[0b37f68d-c9f6-4da9-b9c3-42bb71039fe2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:54.347 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[79ba035f-66de-4c16-bcb6-f9b02a93a3c7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:54 compute-0 nova_compute[189608]: 2025-11-24 22:30:54.350 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:54 compute-0 NetworkManager[56413]: <info>  [1764023454.3696] device (tap05ca4ec0-d6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:54.368 106891 DEBUG oslo.privsep.daemon [-] privsep: reply[d287d953-d956-458d-80a3-c0285909a2ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:54 compute-0 NetworkManager[56413]: <info>  [1764023454.3707] device (tap05ca4ec0-d6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 24 22:30:54 compute-0 systemd-machined[155884]: New machine qemu-12-instance-0000000c.
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:54.386 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[200ae17d-6c85-485e-b014-eca34b97574e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:54 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-0000000c.
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:54.425 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[cb5ad8c0-38e0-4ee9-b5b3-235577bdbd5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:54.433 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[4cc10415-b7f5-4322-8d0b-784d855faa0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:54 compute-0 NetworkManager[56413]: <info>  [1764023454.4348] manager: (tapd197f7a3-90): new Veth device (/org/freedesktop/NetworkManager/Devices/61)
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:54.477 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[3dae313c-a894-4bf2-a847-baa5dcc9644e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:54.492 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[10d78dcc-ab3b-49ab-bd25-4bce5387a0bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:54 compute-0 NetworkManager[56413]: <info>  [1764023454.5286] device (tapd197f7a3-90): carrier: link connected
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:54.537 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[f09668a9-a368-4605-945c-108bcc2f88c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:54.559 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[2bd6df69-422d-40b4-b8ea-e39b133a553e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd197f7a3-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5a:72:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 530202, 'reachable_time': 15936, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253169, 'error': None, 'target': 'ovnmeta-d197f7a3-9f5f-489e-ac56-bdf9c1500396', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:54.580 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[ee45def2-608a-4fb1-89a4-64a9a892e7b8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5a:72d1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 530202, 'tstamp': 530202}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253170, 'error': None, 'target': 'ovnmeta-d197f7a3-9f5f-489e-ac56-bdf9c1500396', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:54.595 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:54.596 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:54.598 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:54.604 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[539e4b61-90b3-46ea-985f-0539564c85bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd197f7a3-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5a:72:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 530202, 'reachable_time': 15936, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253171, 'error': None, 'target': 'ovnmeta-d197f7a3-9f5f-489e-ac56-bdf9c1500396', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:54.650 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[4a9f5e14-0aa5-4f5d-905c-d1a36e3c95a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:54.719 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[26a6653d-4dfd-4746-93c5-1b9654c35ac0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:54.722 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd197f7a3-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:54.722 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:54.722 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd197f7a3-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:30:54 compute-0 NetworkManager[56413]: <info>  [1764023454.7258] manager: (tapd197f7a3-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Nov 24 22:30:54 compute-0 nova_compute[189608]: 2025-11-24 22:30:54.725 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:54 compute-0 kernel: tapd197f7a3-90: entered promiscuous mode
Nov 24 22:30:54 compute-0 nova_compute[189608]: 2025-11-24 22:30:54.729 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:54.730 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd197f7a3-90, col_values=(('external_ids', {'iface-id': 'efe5b79d-0c1f-42d7-9ee1-5984adff9324'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:30:54 compute-0 ovn_controller[97889]: 2025-11-24T22:30:54Z|00138|binding|INFO|Releasing lport efe5b79d-0c1f-42d7-9ee1-5984adff9324 from this chassis (sb_readonly=0)
Nov 24 22:30:54 compute-0 nova_compute[189608]: 2025-11-24 22:30:54.734 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:54.734 106776 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d197f7a3-9f5f-489e-ac56-bdf9c1500396.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d197f7a3-9f5f-489e-ac56-bdf9c1500396.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:54.736 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[7ff7063a-c8bb-4e80-9cfe-f96e182a661d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:54.737 106776 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]: global
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]:     log         /dev/log local0 debug
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]:     log-tag     haproxy-metadata-proxy-d197f7a3-9f5f-489e-ac56-bdf9c1500396
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]:     user        root
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]:     group       root
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]:     maxconn     1024
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]:     pidfile     /var/lib/neutron/external/pids/d197f7a3-9f5f-489e-ac56-bdf9c1500396.pid.haproxy
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]:     daemon
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]: 
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]: defaults
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]:     log global
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]:     mode http
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]:     option httplog
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]:     option dontlognull
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]:     option http-server-close
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]:     option forwardfor
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]:     retries                 3
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]:     timeout http-request    30s
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]:     timeout connect         30s
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]:     timeout client          32s
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]:     timeout server          32s
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]:     timeout http-keep-alive 30s
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]: 
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]: 
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]: listen listener
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]:     bind 169.254.169.254:80
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]:     server metadata /var/lib/neutron/metadata_proxy
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]:     http-request add-header X-OVN-Network-ID d197f7a3-9f5f-489e-ac56-bdf9c1500396
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 24 22:30:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:54.737 106776 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d197f7a3-9f5f-489e-ac56-bdf9c1500396', 'env', 'PROCESS_TAG=haproxy-d197f7a3-9f5f-489e-ac56-bdf9c1500396', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d197f7a3-9f5f-489e-ac56-bdf9c1500396.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 24 22:30:54 compute-0 nova_compute[189608]: 2025-11-24 22:30:54.753 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:54 compute-0 nova_compute[189608]: 2025-11-24 22:30:54.775 189613 DEBUG nova.network.neutron [req-bca33bd7-b23c-46f8-bb74-d552dae16145 req-a71564ea-e8a9-47d2-89da-9dcf33c77ccd c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Updated VIF entry in instance network info cache for port 05ca4ec0-d6b5-444a-a18c-1dbbc28c6267. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 22:30:54 compute-0 nova_compute[189608]: 2025-11-24 22:30:54.777 189613 DEBUG nova.network.neutron [req-bca33bd7-b23c-46f8-bb74-d552dae16145 req-a71564ea-e8a9-47d2-89da-9dcf33c77ccd c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Updating instance_info_cache with network_info: [{"id": "05ca4ec0-d6b5-444a-a18c-1dbbc28c6267", "address": "fa:16:3e:29:40:00", "network": {"id": "d197f7a3-9f5f-489e-ac56-bdf9c1500396", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1144015615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef25c1f070fc464ea634b7f669bcc935", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05ca4ec0-d6", "ovs_interfaceid": "05ca4ec0-d6b5-444a-a18c-1dbbc28c6267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:30:54 compute-0 nova_compute[189608]: 2025-11-24 22:30:54.796 189613 DEBUG oslo_concurrency.lockutils [req-bca33bd7-b23c-46f8-bb74-d552dae16145 req-a71564ea-e8a9-47d2-89da-9dcf33c77ccd c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Releasing lock "refresh_cache-7e9cad45-0047-443a-9aae-51409c77ea0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:30:54 compute-0 nova_compute[189608]: 2025-11-24 22:30:54.939 189613 DEBUG nova.virt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Emitting event <LifecycleEvent: 1764023454.9387484, 7e9cad45-0047-443a-9aae-51409c77ea0e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:30:54 compute-0 nova_compute[189608]: 2025-11-24 22:30:54.939 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] VM Started (Lifecycle Event)
Nov 24 22:30:54 compute-0 nova_compute[189608]: 2025-11-24 22:30:54.956 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:30:54 compute-0 nova_compute[189608]: 2025-11-24 22:30:54.963 189613 DEBUG nova.virt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Emitting event <LifecycleEvent: 1764023454.938893, 7e9cad45-0047-443a-9aae-51409c77ea0e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:30:54 compute-0 nova_compute[189608]: 2025-11-24 22:30:54.964 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] VM Paused (Lifecycle Event)
Nov 24 22:30:55 compute-0 nova_compute[189608]: 2025-11-24 22:30:55.005 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:30:55 compute-0 nova_compute[189608]: 2025-11-24 22:30:55.011 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 22:30:55 compute-0 nova_compute[189608]: 2025-11-24 22:30:55.036 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 22:30:55 compute-0 podman[253210]: 2025-11-24 22:30:55.230473316 +0000 UTC m=+0.077286863 container create 6e18d946325f0168b1d8c94a25641ee2a1db3abbebb6c8a42f097c32910e6496 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d197f7a3-9f5f-489e-ac56-bdf9c1500396, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 24 22:30:55 compute-0 sshd-session[253057]: Disconnecting invalid user intern 185.217.1.246 port 56544: Change of username or service not allowed: (intern,ssh-connection) -> (git,ssh-connection) [preauth]
Nov 24 22:30:55 compute-0 podman[253210]: 2025-11-24 22:30:55.186120468 +0000 UTC m=+0.032934035 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 22:30:55 compute-0 systemd[1]: Started libpod-conmon-6e18d946325f0168b1d8c94a25641ee2a1db3abbebb6c8a42f097c32910e6496.scope.
Nov 24 22:30:55 compute-0 systemd[1]: Started libcrun container.
Nov 24 22:30:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bee6a6de7d1f184a3c3aabfe161d6b481e3f2af48085d32de5b16ec31e808d12/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 24 22:30:55 compute-0 podman[253210]: 2025-11-24 22:30:55.359964459 +0000 UTC m=+0.206778026 container init 6e18d946325f0168b1d8c94a25641ee2a1db3abbebb6c8a42f097c32910e6496 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d197f7a3-9f5f-489e-ac56-bdf9c1500396, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 24 22:30:55 compute-0 podman[253210]: 2025-11-24 22:30:55.367581646 +0000 UTC m=+0.214395193 container start 6e18d946325f0168b1d8c94a25641ee2a1db3abbebb6c8a42f097c32910e6496 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d197f7a3-9f5f-489e-ac56-bdf9c1500396, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 24 22:30:55 compute-0 neutron-haproxy-ovnmeta-d197f7a3-9f5f-489e-ac56-bdf9c1500396[253225]: [NOTICE]   (253229) : New worker (253231) forked
Nov 24 22:30:55 compute-0 neutron-haproxy-ovnmeta-d197f7a3-9f5f-489e-ac56-bdf9c1500396[253225]: [NOTICE]   (253229) : Loading success.
Nov 24 22:30:56 compute-0 podman[253240]: 2025-11-24 22:30:56.602690154 +0000 UTC m=+0.139958050 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 24 22:30:56 compute-0 nova_compute[189608]: 2025-11-24 22:30:56.905 189613 DEBUG nova.compute.manager [req-e62a27ab-80c5-420a-8dbf-ec949375ec75 req-39e389c9-56e4-454b-bbdf-1746284023da c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Received event network-vif-plugged-05ca4ec0-d6b5-444a-a18c-1dbbc28c6267 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:30:56 compute-0 nova_compute[189608]: 2025-11-24 22:30:56.905 189613 DEBUG oslo_concurrency.lockutils [req-e62a27ab-80c5-420a-8dbf-ec949375ec75 req-39e389c9-56e4-454b-bbdf-1746284023da c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "7e9cad45-0047-443a-9aae-51409c77ea0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:30:56 compute-0 nova_compute[189608]: 2025-11-24 22:30:56.907 189613 DEBUG oslo_concurrency.lockutils [req-e62a27ab-80c5-420a-8dbf-ec949375ec75 req-39e389c9-56e4-454b-bbdf-1746284023da c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "7e9cad45-0047-443a-9aae-51409c77ea0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:30:56 compute-0 nova_compute[189608]: 2025-11-24 22:30:56.907 189613 DEBUG oslo_concurrency.lockutils [req-e62a27ab-80c5-420a-8dbf-ec949375ec75 req-39e389c9-56e4-454b-bbdf-1746284023da c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "7e9cad45-0047-443a-9aae-51409c77ea0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:30:56 compute-0 nova_compute[189608]: 2025-11-24 22:30:56.907 189613 DEBUG nova.compute.manager [req-e62a27ab-80c5-420a-8dbf-ec949375ec75 req-39e389c9-56e4-454b-bbdf-1746284023da c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Processing event network-vif-plugged-05ca4ec0-d6b5-444a-a18c-1dbbc28c6267 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 24 22:30:56 compute-0 nova_compute[189608]: 2025-11-24 22:30:56.908 189613 DEBUG nova.compute.manager [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 24 22:30:56 compute-0 nova_compute[189608]: 2025-11-24 22:30:56.914 189613 DEBUG nova.virt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Emitting event <LifecycleEvent: 1764023456.9141583, 7e9cad45-0047-443a-9aae-51409c77ea0e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:30:56 compute-0 nova_compute[189608]: 2025-11-24 22:30:56.914 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] VM Resumed (Lifecycle Event)
Nov 24 22:30:56 compute-0 nova_compute[189608]: 2025-11-24 22:30:56.917 189613 DEBUG nova.virt.libvirt.driver [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 24 22:30:56 compute-0 nova_compute[189608]: 2025-11-24 22:30:56.924 189613 INFO nova.virt.libvirt.driver [-] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Instance spawned successfully.
Nov 24 22:30:56 compute-0 nova_compute[189608]: 2025-11-24 22:30:56.924 189613 DEBUG nova.virt.libvirt.driver [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 24 22:30:56 compute-0 nova_compute[189608]: 2025-11-24 22:30:56.947 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:30:56 compute-0 nova_compute[189608]: 2025-11-24 22:30:56.966 189613 DEBUG nova.virt.libvirt.driver [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:30:56 compute-0 nova_compute[189608]: 2025-11-24 22:30:56.967 189613 DEBUG nova.virt.libvirt.driver [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:30:56 compute-0 nova_compute[189608]: 2025-11-24 22:30:56.969 189613 DEBUG nova.virt.libvirt.driver [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:30:56 compute-0 nova_compute[189608]: 2025-11-24 22:30:56.969 189613 DEBUG nova.virt.libvirt.driver [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:30:56 compute-0 nova_compute[189608]: 2025-11-24 22:30:56.971 189613 DEBUG nova.virt.libvirt.driver [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:30:56 compute-0 nova_compute[189608]: 2025-11-24 22:30:56.972 189613 DEBUG nova.virt.libvirt.driver [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:30:56 compute-0 nova_compute[189608]: 2025-11-24 22:30:56.980 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 22:30:57 compute-0 nova_compute[189608]: 2025-11-24 22:30:57.006 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 22:30:57 compute-0 nova_compute[189608]: 2025-11-24 22:30:57.028 189613 INFO nova.compute.manager [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Took 7.37 seconds to spawn the instance on the hypervisor.
Nov 24 22:30:57 compute-0 nova_compute[189608]: 2025-11-24 22:30:57.029 189613 DEBUG nova.compute.manager [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:30:57 compute-0 nova_compute[189608]: 2025-11-24 22:30:57.106 189613 INFO nova.compute.manager [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Took 7.89 seconds to build instance.
Nov 24 22:30:57 compute-0 nova_compute[189608]: 2025-11-24 22:30:57.119 189613 DEBUG oslo_concurrency.lockutils [None req-e8695de0-f584-4e7c-b0b1-8ba5ce75df20 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Lock "7e9cad45-0047-443a-9aae-51409c77ea0e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.979s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:30:57 compute-0 nova_compute[189608]: 2025-11-24 22:30:57.962 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:58 compute-0 nova_compute[189608]: 2025-11-24 22:30:58.297 189613 INFO nova.compute.manager [None req-07d15fb1-20e6-457a-8635-b6b561769bcd 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Get console output
Nov 24 22:30:58 compute-0 nova_compute[189608]: 2025-11-24 22:30:58.418 239876 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 24 22:30:58 compute-0 nova_compute[189608]: 2025-11-24 22:30:58.649 189613 DEBUG oslo_concurrency.lockutils [None req-b931688f-eb7a-4100-b21a-c59e651d9bc5 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Acquiring lock "7e9cad45-0047-443a-9aae-51409c77ea0e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:30:58 compute-0 nova_compute[189608]: 2025-11-24 22:30:58.650 189613 DEBUG oslo_concurrency.lockutils [None req-b931688f-eb7a-4100-b21a-c59e651d9bc5 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Lock "7e9cad45-0047-443a-9aae-51409c77ea0e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:30:58 compute-0 nova_compute[189608]: 2025-11-24 22:30:58.650 189613 DEBUG oslo_concurrency.lockutils [None req-b931688f-eb7a-4100-b21a-c59e651d9bc5 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Acquiring lock "7e9cad45-0047-443a-9aae-51409c77ea0e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:30:58 compute-0 nova_compute[189608]: 2025-11-24 22:30:58.651 189613 DEBUG oslo_concurrency.lockutils [None req-b931688f-eb7a-4100-b21a-c59e651d9bc5 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Lock "7e9cad45-0047-443a-9aae-51409c77ea0e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:30:58 compute-0 nova_compute[189608]: 2025-11-24 22:30:58.651 189613 DEBUG oslo_concurrency.lockutils [None req-b931688f-eb7a-4100-b21a-c59e651d9bc5 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Lock "7e9cad45-0047-443a-9aae-51409c77ea0e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:30:58 compute-0 nova_compute[189608]: 2025-11-24 22:30:58.653 189613 INFO nova.compute.manager [None req-b931688f-eb7a-4100-b21a-c59e651d9bc5 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Terminating instance
Nov 24 22:30:58 compute-0 nova_compute[189608]: 2025-11-24 22:30:58.654 189613 DEBUG nova.compute.manager [None req-b931688f-eb7a-4100-b21a-c59e651d9bc5 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 24 22:30:58 compute-0 kernel: tap05ca4ec0-d6 (unregistering): left promiscuous mode
Nov 24 22:30:58 compute-0 NetworkManager[56413]: <info>  [1764023458.6927] device (tap05ca4ec0-d6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 24 22:30:58 compute-0 ovn_controller[97889]: 2025-11-24T22:30:58Z|00139|binding|INFO|Releasing lport 05ca4ec0-d6b5-444a-a18c-1dbbc28c6267 from this chassis (sb_readonly=0)
Nov 24 22:30:58 compute-0 nova_compute[189608]: 2025-11-24 22:30:58.707 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:58 compute-0 ovn_controller[97889]: 2025-11-24T22:30:58Z|00140|binding|INFO|Setting lport 05ca4ec0-d6b5-444a-a18c-1dbbc28c6267 down in Southbound
Nov 24 22:30:58 compute-0 ovn_controller[97889]: 2025-11-24T22:30:58Z|00141|binding|INFO|Removing iface tap05ca4ec0-d6 ovn-installed in OVS
Nov 24 22:30:58 compute-0 nova_compute[189608]: 2025-11-24 22:30:58.717 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:58 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:58.730 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:29:40:00 10.100.0.13'], port_security=['fa:16:3e:29:40:00 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '7e9cad45-0047-443a-9aae-51409c77ea0e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d197f7a3-9f5f-489e-ac56-bdf9c1500396', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ef25c1f070fc464ea634b7f669bcc935', 'neutron:revision_number': '4', 'neutron:security_group_ids': '04110bce-6796-49b7-9743-dca438c40b5a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a9abadd9-5d64-4429-b2a8-5e5d27f6d565, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], logical_port=05ca4ec0-d6b5-444a-a18c-1dbbc28c6267) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:30:58 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:58.733 106776 INFO neutron.agent.ovn.metadata.agent [-] Port 05ca4ec0-d6b5-444a-a18c-1dbbc28c6267 in datapath d197f7a3-9f5f-489e-ac56-bdf9c1500396 unbound from our chassis
Nov 24 22:30:58 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:58.738 106776 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d197f7a3-9f5f-489e-ac56-bdf9c1500396, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 22:30:58 compute-0 nova_compute[189608]: 2025-11-24 22:30:58.740 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:58 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:58.740 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[3733b258-6da7-4bfa-b18c-91ccc5c804b9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:58 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:58.743 106776 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d197f7a3-9f5f-489e-ac56-bdf9c1500396 namespace which is not needed anymore
Nov 24 22:30:58 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Nov 24 22:30:58 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Consumed 2.268s CPU time.
Nov 24 22:30:58 compute-0 systemd-machined[155884]: Machine qemu-12-instance-0000000c terminated.
Nov 24 22:30:58 compute-0 podman[253282]: 2025-11-24 22:30:58.816560923 +0000 UTC m=+0.091848894 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 22:30:58 compute-0 podman[253279]: 2025-11-24 22:30:58.856758812 +0000 UTC m=+0.129592577 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller)
Nov 24 22:30:58 compute-0 kernel: tap05ca4ec0-d6: entered promiscuous mode
Nov 24 22:30:58 compute-0 kernel: tap05ca4ec0-d6 (unregistering): left promiscuous mode
Nov 24 22:30:58 compute-0 NetworkManager[56413]: <info>  [1764023458.8898] manager: (tap05ca4ec0-d6): new Tun device (/org/freedesktop/NetworkManager/Devices/63)
Nov 24 22:30:58 compute-0 nova_compute[189608]: 2025-11-24 22:30:58.897 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:58 compute-0 nova_compute[189608]: 2025-11-24 22:30:58.936 189613 INFO nova.virt.libvirt.driver [-] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Instance destroyed successfully.
Nov 24 22:30:58 compute-0 nova_compute[189608]: 2025-11-24 22:30:58.937 189613 DEBUG nova.objects.instance [None req-b931688f-eb7a-4100-b21a-c59e651d9bc5 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Lazy-loading 'resources' on Instance uuid 7e9cad45-0047-443a-9aae-51409c77ea0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:30:58 compute-0 nova_compute[189608]: 2025-11-24 22:30:58.958 189613 DEBUG nova.virt.libvirt.vif [None req-b931688f-eb7a-4100-b21a-c59e651d9bc5 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T22:30:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-754153210',display_name='tempest-ServerAddressesTestJSON-server-754153210',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-754153210',id=12,image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-24T22:30:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ef25c1f070fc464ea634b7f669bcc935',ramdisk_id='',reservation_id='r-17hmm6kn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-1640119929',owner_user_name='tempest-ServerAddressesTestJSON-1640119929-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T22:30:57Z,user_data=None,user_id='3c237a245573467f9ef112e37f828fa4',uuid=7e9cad45-0047-443a-9aae-51409c77ea0e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "05ca4ec0-d6b5-444a-a18c-1dbbc28c6267", "address": "fa:16:3e:29:40:00", "network": {"id": "d197f7a3-9f5f-489e-ac56-bdf9c1500396", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1144015615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef25c1f070fc464ea634b7f669bcc935", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05ca4ec0-d6", "ovs_interfaceid": "05ca4ec0-d6b5-444a-a18c-1dbbc28c6267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 24 22:30:58 compute-0 nova_compute[189608]: 2025-11-24 22:30:58.959 189613 DEBUG nova.network.os_vif_util [None req-b931688f-eb7a-4100-b21a-c59e651d9bc5 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Converting VIF {"id": "05ca4ec0-d6b5-444a-a18c-1dbbc28c6267", "address": "fa:16:3e:29:40:00", "network": {"id": "d197f7a3-9f5f-489e-ac56-bdf9c1500396", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1144015615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef25c1f070fc464ea634b7f669bcc935", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05ca4ec0-d6", "ovs_interfaceid": "05ca4ec0-d6b5-444a-a18c-1dbbc28c6267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 22:30:58 compute-0 nova_compute[189608]: 2025-11-24 22:30:58.960 189613 DEBUG nova.network.os_vif_util [None req-b931688f-eb7a-4100-b21a-c59e651d9bc5 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:29:40:00,bridge_name='br-int',has_traffic_filtering=True,id=05ca4ec0-d6b5-444a-a18c-1dbbc28c6267,network=Network(d197f7a3-9f5f-489e-ac56-bdf9c1500396),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05ca4ec0-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 22:30:58 compute-0 nova_compute[189608]: 2025-11-24 22:30:58.961 189613 DEBUG os_vif [None req-b931688f-eb7a-4100-b21a-c59e651d9bc5 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:40:00,bridge_name='br-int',has_traffic_filtering=True,id=05ca4ec0-d6b5-444a-a18c-1dbbc28c6267,network=Network(d197f7a3-9f5f-489e-ac56-bdf9c1500396),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05ca4ec0-d6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 24 22:30:58 compute-0 nova_compute[189608]: 2025-11-24 22:30:58.963 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:58 compute-0 nova_compute[189608]: 2025-11-24 22:30:58.965 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05ca4ec0-d6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:30:58 compute-0 nova_compute[189608]: 2025-11-24 22:30:58.968 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 22:30:58 compute-0 neutron-haproxy-ovnmeta-d197f7a3-9f5f-489e-ac56-bdf9c1500396[253225]: [NOTICE]   (253229) : haproxy version is 2.8.14-c23fe91
Nov 24 22:30:58 compute-0 neutron-haproxy-ovnmeta-d197f7a3-9f5f-489e-ac56-bdf9c1500396[253225]: [NOTICE]   (253229) : path to executable is /usr/sbin/haproxy
Nov 24 22:30:58 compute-0 neutron-haproxy-ovnmeta-d197f7a3-9f5f-489e-ac56-bdf9c1500396[253225]: [ALERT]    (253229) : Current worker (253231) exited with code 143 (Terminated)
Nov 24 22:30:58 compute-0 neutron-haproxy-ovnmeta-d197f7a3-9f5f-489e-ac56-bdf9c1500396[253225]: [WARNING]  (253229) : All workers exited. Exiting... (0)
Nov 24 22:30:58 compute-0 nova_compute[189608]: 2025-11-24 22:30:58.973 189613 INFO os_vif [None req-b931688f-eb7a-4100-b21a-c59e651d9bc5 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:40:00,bridge_name='br-int',has_traffic_filtering=True,id=05ca4ec0-d6b5-444a-a18c-1dbbc28c6267,network=Network(d197f7a3-9f5f-489e-ac56-bdf9c1500396),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05ca4ec0-d6')
Nov 24 22:30:58 compute-0 nova_compute[189608]: 2025-11-24 22:30:58.974 189613 INFO nova.virt.libvirt.driver [None req-b931688f-eb7a-4100-b21a-c59e651d9bc5 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Deleting instance files /var/lib/nova/instances/7e9cad45-0047-443a-9aae-51409c77ea0e_del
Nov 24 22:30:58 compute-0 nova_compute[189608]: 2025-11-24 22:30:58.975 189613 INFO nova.virt.libvirt.driver [None req-b931688f-eb7a-4100-b21a-c59e651d9bc5 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Deletion of /var/lib/nova/instances/7e9cad45-0047-443a-9aae-51409c77ea0e_del complete
Nov 24 22:30:58 compute-0 systemd[1]: libpod-6e18d946325f0168b1d8c94a25641ee2a1db3abbebb6c8a42f097c32910e6496.scope: Deactivated successfully.
Nov 24 22:30:58 compute-0 podman[253346]: 2025-11-24 22:30:58.985556694 +0000 UTC m=+0.093128524 container died 6e18d946325f0168b1d8c94a25641ee2a1db3abbebb6c8a42f097c32910e6496 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d197f7a3-9f5f-489e-ac56-bdf9c1500396, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 24 22:30:59 compute-0 nova_compute[189608]: 2025-11-24 22:30:59.024 189613 DEBUG nova.compute.manager [req-7ab65cc8-8061-4438-b6df-acdc1fc169d0 req-657d90f0-fb83-4800-88aa-f157822261e9 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Received event network-vif-plugged-05ca4ec0-d6b5-444a-a18c-1dbbc28c6267 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:30:59 compute-0 nova_compute[189608]: 2025-11-24 22:30:59.024 189613 DEBUG oslo_concurrency.lockutils [req-7ab65cc8-8061-4438-b6df-acdc1fc169d0 req-657d90f0-fb83-4800-88aa-f157822261e9 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "7e9cad45-0047-443a-9aae-51409c77ea0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:30:59 compute-0 nova_compute[189608]: 2025-11-24 22:30:59.025 189613 DEBUG oslo_concurrency.lockutils [req-7ab65cc8-8061-4438-b6df-acdc1fc169d0 req-657d90f0-fb83-4800-88aa-f157822261e9 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "7e9cad45-0047-443a-9aae-51409c77ea0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:30:59 compute-0 nova_compute[189608]: 2025-11-24 22:30:59.025 189613 DEBUG oslo_concurrency.lockutils [req-7ab65cc8-8061-4438-b6df-acdc1fc169d0 req-657d90f0-fb83-4800-88aa-f157822261e9 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "7e9cad45-0047-443a-9aae-51409c77ea0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:30:59 compute-0 nova_compute[189608]: 2025-11-24 22:30:59.025 189613 DEBUG nova.compute.manager [req-7ab65cc8-8061-4438-b6df-acdc1fc169d0 req-657d90f0-fb83-4800-88aa-f157822261e9 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] No waiting events found dispatching network-vif-plugged-05ca4ec0-d6b5-444a-a18c-1dbbc28c6267 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 22:30:59 compute-0 nova_compute[189608]: 2025-11-24 22:30:59.026 189613 WARNING nova.compute.manager [req-7ab65cc8-8061-4438-b6df-acdc1fc169d0 req-657d90f0-fb83-4800-88aa-f157822261e9 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Received unexpected event network-vif-plugged-05ca4ec0-d6b5-444a-a18c-1dbbc28c6267 for instance with vm_state active and task_state deleting.
Nov 24 22:30:59 compute-0 nova_compute[189608]: 2025-11-24 22:30:59.026 189613 DEBUG nova.compute.manager [req-7ab65cc8-8061-4438-b6df-acdc1fc169d0 req-657d90f0-fb83-4800-88aa-f157822261e9 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Received event network-vif-unplugged-05ca4ec0-d6b5-444a-a18c-1dbbc28c6267 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:30:59 compute-0 nova_compute[189608]: 2025-11-24 22:30:59.026 189613 DEBUG oslo_concurrency.lockutils [req-7ab65cc8-8061-4438-b6df-acdc1fc169d0 req-657d90f0-fb83-4800-88aa-f157822261e9 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "7e9cad45-0047-443a-9aae-51409c77ea0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:30:59 compute-0 nova_compute[189608]: 2025-11-24 22:30:59.027 189613 DEBUG oslo_concurrency.lockutils [req-7ab65cc8-8061-4438-b6df-acdc1fc169d0 req-657d90f0-fb83-4800-88aa-f157822261e9 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "7e9cad45-0047-443a-9aae-51409c77ea0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:30:59 compute-0 nova_compute[189608]: 2025-11-24 22:30:59.028 189613 DEBUG oslo_concurrency.lockutils [req-7ab65cc8-8061-4438-b6df-acdc1fc169d0 req-657d90f0-fb83-4800-88aa-f157822261e9 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "7e9cad45-0047-443a-9aae-51409c77ea0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:30:59 compute-0 nova_compute[189608]: 2025-11-24 22:30:59.028 189613 DEBUG nova.compute.manager [req-7ab65cc8-8061-4438-b6df-acdc1fc169d0 req-657d90f0-fb83-4800-88aa-f157822261e9 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] No waiting events found dispatching network-vif-unplugged-05ca4ec0-d6b5-444a-a18c-1dbbc28c6267 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 22:30:59 compute-0 nova_compute[189608]: 2025-11-24 22:30:59.028 189613 DEBUG nova.compute.manager [req-7ab65cc8-8061-4438-b6df-acdc1fc169d0 req-657d90f0-fb83-4800-88aa-f157822261e9 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Received event network-vif-unplugged-05ca4ec0-d6b5-444a-a18c-1dbbc28c6267 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 24 22:30:59 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6e18d946325f0168b1d8c94a25641ee2a1db3abbebb6c8a42f097c32910e6496-userdata-shm.mount: Deactivated successfully.
Nov 24 22:30:59 compute-0 nova_compute[189608]: 2025-11-24 22:30:59.041 189613 INFO nova.compute.manager [None req-b931688f-eb7a-4100-b21a-c59e651d9bc5 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Took 0.39 seconds to destroy the instance on the hypervisor.
Nov 24 22:30:59 compute-0 nova_compute[189608]: 2025-11-24 22:30:59.042 189613 DEBUG oslo.service.loopingcall [None req-b931688f-eb7a-4100-b21a-c59e651d9bc5 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 24 22:30:59 compute-0 nova_compute[189608]: 2025-11-24 22:30:59.042 189613 DEBUG nova.compute.manager [-] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 24 22:30:59 compute-0 nova_compute[189608]: 2025-11-24 22:30:59.043 189613 DEBUG nova.network.neutron [-] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 24 22:30:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-bee6a6de7d1f184a3c3aabfe161d6b481e3f2af48085d32de5b16ec31e808d12-merged.mount: Deactivated successfully.
Nov 24 22:30:59 compute-0 nova_compute[189608]: 2025-11-24 22:30:59.056 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:59 compute-0 podman[253346]: 2025-11-24 22:30:59.059974037 +0000 UTC m=+0.167545857 container cleanup 6e18d946325f0168b1d8c94a25641ee2a1db3abbebb6c8a42f097c32910e6496 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d197f7a3-9f5f-489e-ac56-bdf9c1500396, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 22:30:59 compute-0 systemd[1]: libpod-conmon-6e18d946325f0168b1d8c94a25641ee2a1db3abbebb6c8a42f097c32910e6496.scope: Deactivated successfully.
Nov 24 22:30:59 compute-0 podman[253386]: 2025-11-24 22:30:59.182882206 +0000 UTC m=+0.076450297 container remove 6e18d946325f0168b1d8c94a25641ee2a1db3abbebb6c8a42f097c32910e6496 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d197f7a3-9f5f-489e-ac56-bdf9c1500396, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 24 22:30:59 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:59.194 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[d8263695-d724-4456-8272-b3b79f1b2d2e]: (4, ('Mon Nov 24 10:30:58 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d197f7a3-9f5f-489e-ac56-bdf9c1500396 (6e18d946325f0168b1d8c94a25641ee2a1db3abbebb6c8a42f097c32910e6496)\n6e18d946325f0168b1d8c94a25641ee2a1db3abbebb6c8a42f097c32910e6496\nMon Nov 24 10:30:59 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d197f7a3-9f5f-489e-ac56-bdf9c1500396 (6e18d946325f0168b1d8c94a25641ee2a1db3abbebb6c8a42f097c32910e6496)\n6e18d946325f0168b1d8c94a25641ee2a1db3abbebb6c8a42f097c32910e6496\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:59 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:59.196 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[8244735a-c451-4f6a-ae92-278858a67132]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:59 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:59.197 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd197f7a3-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:30:59 compute-0 nova_compute[189608]: 2025-11-24 22:30:59.200 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:59 compute-0 kernel: tapd197f7a3-90: left promiscuous mode
Nov 24 22:30:59 compute-0 nova_compute[189608]: 2025-11-24 22:30:59.223 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:59 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:59.227 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[fab92155-a199-4576-9e8d-bad733f05826]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:59 compute-0 nova_compute[189608]: 2025-11-24 22:30:59.229 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:30:59 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:59.246 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[dd6b7359-b3ad-4923-a7d2-2152b1fd67a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:59 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:59.248 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[e5def2a6-adf8-42e4-88a1-f413821601d3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:59 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:59.266 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[9afeee63-18bb-41bb-8483-1214e97e3952]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 530190, 'reachable_time': 22228, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253401, 'error': None, 'target': 'ovnmeta-d197f7a3-9f5f-489e-ac56-bdf9c1500396', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:59 compute-0 ovn_controller[97889]: 2025-11-24T22:30:59Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:40:6c:bb 10.100.0.88
Nov 24 22:30:59 compute-0 systemd[1]: run-netns-ovnmeta\x2dd197f7a3\x2d9f5f\x2d489e\x2dac56\x2dbdf9c1500396.mount: Deactivated successfully.
Nov 24 22:30:59 compute-0 ovn_controller[97889]: 2025-11-24T22:30:59Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:40:6c:bb 10.100.0.88
Nov 24 22:30:59 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:59.272 106891 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d197f7a3-9f5f-489e-ac56-bdf9c1500396 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 24 22:30:59 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:30:59.272 106891 DEBUG oslo.privsep.daemon [-] privsep: reply[38f253c6-b855-4bca-99e4-d2e014664f53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:30:59 compute-0 podman[203795]: time="2025-11-24T22:30:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:30:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:30:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31991 "" "Go-http-client/1.1"
Nov 24 22:30:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:30:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5735 "" "Go-http-client/1.1"
Nov 24 22:30:59 compute-0 nova_compute[189608]: 2025-11-24 22:30:59.809 189613 DEBUG nova.network.neutron [-] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:30:59 compute-0 nova_compute[189608]: 2025-11-24 22:30:59.831 189613 INFO nova.compute.manager [-] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Took 0.79 seconds to deallocate network for instance.
Nov 24 22:30:59 compute-0 nova_compute[189608]: 2025-11-24 22:30:59.874 189613 DEBUG oslo_concurrency.lockutils [None req-b931688f-eb7a-4100-b21a-c59e651d9bc5 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:30:59 compute-0 nova_compute[189608]: 2025-11-24 22:30:59.875 189613 DEBUG oslo_concurrency.lockutils [None req-b931688f-eb7a-4100-b21a-c59e651d9bc5 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:31:00 compute-0 nova_compute[189608]: 2025-11-24 22:31:00.005 189613 DEBUG nova.compute.manager [req-035289ee-7741-448b-b97a-877cced7af34 req-6e5f326d-473a-4807-af6e-7c46219bcfe1 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Received event network-vif-deleted-05ca4ec0-d6b5-444a-a18c-1dbbc28c6267 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:31:00 compute-0 nova_compute[189608]: 2025-11-24 22:31:00.042 189613 DEBUG nova.compute.provider_tree [None req-b931688f-eb7a-4100-b21a-c59e651d9bc5 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:31:00 compute-0 nova_compute[189608]: 2025-11-24 22:31:00.062 189613 DEBUG nova.scheduler.client.report [None req-b931688f-eb7a-4100-b21a-c59e651d9bc5 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:31:00 compute-0 nova_compute[189608]: 2025-11-24 22:31:00.092 189613 DEBUG oslo_concurrency.lockutils [None req-b931688f-eb7a-4100-b21a-c59e651d9bc5 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.216s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:31:00 compute-0 nova_compute[189608]: 2025-11-24 22:31:00.139 189613 INFO nova.scheduler.client.report [None req-b931688f-eb7a-4100-b21a-c59e651d9bc5 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Deleted allocations for instance 7e9cad45-0047-443a-9aae-51409c77ea0e
Nov 24 22:31:00 compute-0 nova_compute[189608]: 2025-11-24 22:31:00.229 189613 DEBUG oslo_concurrency.lockutils [None req-b931688f-eb7a-4100-b21a-c59e651d9bc5 3c237a245573467f9ef112e37f828fa4 ef25c1f070fc464ea634b7f669bcc935 - - default default] Lock "7e9cad45-0047-443a-9aae-51409c77ea0e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.579s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:31:00 compute-0 nova_compute[189608]: 2025-11-24 22:31:00.263 189613 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764023445.2608612, 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:31:00 compute-0 nova_compute[189608]: 2025-11-24 22:31:00.263 189613 INFO nova.compute.manager [-] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] VM Stopped (Lifecycle Event)
Nov 24 22:31:00 compute-0 nova_compute[189608]: 2025-11-24 22:31:00.280 189613 DEBUG nova.compute.manager [None req-46f45afb-854d-4b5c-8447-15a11b91231b - - - - - -] [instance: 8b851edf-b3aa-4ca0-a142-8dd0d0e6270a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:31:01 compute-0 nova_compute[189608]: 2025-11-24 22:31:01.203 189613 DEBUG nova.compute.manager [req-7c9f0d77-e576-4269-ab97-eac5e8c63567 req-c335134f-83be-43fd-be7e-e0ce6550c9a1 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Received event network-vif-plugged-05ca4ec0-d6b5-444a-a18c-1dbbc28c6267 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:31:01 compute-0 nova_compute[189608]: 2025-11-24 22:31:01.204 189613 DEBUG oslo_concurrency.lockutils [req-7c9f0d77-e576-4269-ab97-eac5e8c63567 req-c335134f-83be-43fd-be7e-e0ce6550c9a1 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "7e9cad45-0047-443a-9aae-51409c77ea0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:31:01 compute-0 nova_compute[189608]: 2025-11-24 22:31:01.206 189613 DEBUG oslo_concurrency.lockutils [req-7c9f0d77-e576-4269-ab97-eac5e8c63567 req-c335134f-83be-43fd-be7e-e0ce6550c9a1 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "7e9cad45-0047-443a-9aae-51409c77ea0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:31:01 compute-0 nova_compute[189608]: 2025-11-24 22:31:01.206 189613 DEBUG oslo_concurrency.lockutils [req-7c9f0d77-e576-4269-ab97-eac5e8c63567 req-c335134f-83be-43fd-be7e-e0ce6550c9a1 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "7e9cad45-0047-443a-9aae-51409c77ea0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:31:01 compute-0 nova_compute[189608]: 2025-11-24 22:31:01.207 189613 DEBUG nova.compute.manager [req-7c9f0d77-e576-4269-ab97-eac5e8c63567 req-c335134f-83be-43fd-be7e-e0ce6550c9a1 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] No waiting events found dispatching network-vif-plugged-05ca4ec0-d6b5-444a-a18c-1dbbc28c6267 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 22:31:01 compute-0 nova_compute[189608]: 2025-11-24 22:31:01.208 189613 WARNING nova.compute.manager [req-7c9f0d77-e576-4269-ab97-eac5e8c63567 req-c335134f-83be-43fd-be7e-e0ce6550c9a1 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Received unexpected event network-vif-plugged-05ca4ec0-d6b5-444a-a18c-1dbbc28c6267 for instance with vm_state deleted and task_state None.
Nov 24 22:31:01 compute-0 nova_compute[189608]: 2025-11-24 22:31:01.209 189613 DEBUG nova.compute.manager [req-7c9f0d77-e576-4269-ab97-eac5e8c63567 req-c335134f-83be-43fd-be7e-e0ce6550c9a1 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Received event network-changed-8f00051e-bd87-48eb-aba6-5dbf3d527aef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:31:01 compute-0 nova_compute[189608]: 2025-11-24 22:31:01.210 189613 DEBUG nova.compute.manager [req-7c9f0d77-e576-4269-ab97-eac5e8c63567 req-c335134f-83be-43fd-be7e-e0ce6550c9a1 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Refreshing instance network info cache due to event network-changed-8f00051e-bd87-48eb-aba6-5dbf3d527aef. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 22:31:01 compute-0 nova_compute[189608]: 2025-11-24 22:31:01.211 189613 DEBUG oslo_concurrency.lockutils [req-7c9f0d77-e576-4269-ab97-eac5e8c63567 req-c335134f-83be-43fd-be7e-e0ce6550c9a1 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "refresh_cache-cf45f1e3-b80d-4213-80aa-995f57a9a476" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:31:01 compute-0 nova_compute[189608]: 2025-11-24 22:31:01.211 189613 DEBUG oslo_concurrency.lockutils [req-7c9f0d77-e576-4269-ab97-eac5e8c63567 req-c335134f-83be-43fd-be7e-e0ce6550c9a1 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquired lock "refresh_cache-cf45f1e3-b80d-4213-80aa-995f57a9a476" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:31:01 compute-0 nova_compute[189608]: 2025-11-24 22:31:01.212 189613 DEBUG nova.network.neutron [req-7c9f0d77-e576-4269-ab97-eac5e8c63567 req-c335134f-83be-43fd-be7e-e0ce6550c9a1 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Refreshing network info cache for port 8f00051e-bd87-48eb-aba6-5dbf3d527aef _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 22:31:01 compute-0 openstack_network_exporter[205945]: ERROR   22:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:31:01 compute-0 openstack_network_exporter[205945]: ERROR   22:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:31:01 compute-0 openstack_network_exporter[205945]: ERROR   22:31:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:31:01 compute-0 openstack_network_exporter[205945]: ERROR   22:31:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:31:01 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:31:01 compute-0 openstack_network_exporter[205945]: ERROR   22:31:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:31:01 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:31:03 compute-0 sshd-session[253278]: Invalid user git from 185.217.1.246 port 17382
Nov 24 22:31:03 compute-0 nova_compute[189608]: 2025-11-24 22:31:03.968 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:04 compute-0 nova_compute[189608]: 2025-11-24 22:31:04.059 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:04 compute-0 nova_compute[189608]: 2025-11-24 22:31:04.230 189613 DEBUG nova.network.neutron [req-7c9f0d77-e576-4269-ab97-eac5e8c63567 req-c335134f-83be-43fd-be7e-e0ce6550c9a1 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Updated VIF entry in instance network info cache for port 8f00051e-bd87-48eb-aba6-5dbf3d527aef. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 22:31:04 compute-0 nova_compute[189608]: 2025-11-24 22:31:04.231 189613 DEBUG nova.network.neutron [req-7c9f0d77-e576-4269-ab97-eac5e8c63567 req-c335134f-83be-43fd-be7e-e0ce6550c9a1 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Updating instance_info_cache with network_info: [{"id": "8f00051e-bd87-48eb-aba6-5dbf3d527aef", "address": "fa:16:3e:d2:89:1f", "network": {"id": "6c160915-cb1d-4981-a2c7-30899c389f1d", "bridge": "br-int", "label": "tempest-network-smoke--162777084", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ac27d3d1c734f4bab455262f79d3106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f00051e-bd", "ovs_interfaceid": "8f00051e-bd87-48eb-aba6-5dbf3d527aef", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:31:04 compute-0 nova_compute[189608]: 2025-11-24 22:31:04.252 189613 DEBUG oslo_concurrency.lockutils [req-7c9f0d77-e576-4269-ab97-eac5e8c63567 req-c335134f-83be-43fd-be7e-e0ce6550c9a1 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Releasing lock "refresh_cache-cf45f1e3-b80d-4213-80aa-995f57a9a476" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:31:04 compute-0 ovn_controller[97889]: 2025-11-24T22:31:04Z|00142|binding|INFO|Releasing lport ce3870c0-48db-470b-8d5d-479134c9b554 from this chassis (sb_readonly=0)
Nov 24 22:31:04 compute-0 ovn_controller[97889]: 2025-11-24T22:31:04Z|00143|binding|INFO|Releasing lport 5660e6d6-677f-4bf6-8ebf-40ac9c648155 from this chassis (sb_readonly=0)
Nov 24 22:31:04 compute-0 ovn_controller[97889]: 2025-11-24T22:31:04Z|00144|binding|INFO|Releasing lport 7dcd4ddb-3860-49b9-87ed-1daf692defef from this chassis (sb_readonly=0)
Nov 24 22:31:04 compute-0 nova_compute[189608]: 2025-11-24 22:31:04.486 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:04 compute-0 sshd-session[253278]: Disconnecting invalid user git 185.217.1.246 port 17382: Change of username or service not allowed: (git,ssh-connection) -> (amir,ssh-connection) [preauth]
Nov 24 22:31:08 compute-0 podman[253405]: 2025-11-24 22:31:08.569807679 +0000 UTC m=+0.110174345 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 24 22:31:08 compute-0 nova_compute[189608]: 2025-11-24 22:31:08.974 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:09 compute-0 nova_compute[189608]: 2025-11-24 22:31:09.066 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:10 compute-0 nova_compute[189608]: 2025-11-24 22:31:10.225 189613 DEBUG oslo_concurrency.lockutils [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Acquiring lock "c6cadef7-2599-4c75-a37d-2d1e6d469a82" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:31:10 compute-0 nova_compute[189608]: 2025-11-24 22:31:10.227 189613 DEBUG oslo_concurrency.lockutils [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Lock "c6cadef7-2599-4c75-a37d-2d1e6d469a82" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:31:10 compute-0 nova_compute[189608]: 2025-11-24 22:31:10.253 189613 DEBUG nova.compute.manager [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 24 22:31:10 compute-0 nova_compute[189608]: 2025-11-24 22:31:10.365 189613 DEBUG oslo_concurrency.lockutils [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:31:10 compute-0 nova_compute[189608]: 2025-11-24 22:31:10.367 189613 DEBUG oslo_concurrency.lockutils [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:31:10 compute-0 nova_compute[189608]: 2025-11-24 22:31:10.382 189613 DEBUG nova.virt.hardware [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 24 22:31:10 compute-0 nova_compute[189608]: 2025-11-24 22:31:10.383 189613 INFO nova.compute.claims [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Claim successful on node compute-0.ctlplane.example.com
Nov 24 22:31:10 compute-0 nova_compute[189608]: 2025-11-24 22:31:10.621 189613 DEBUG nova.compute.provider_tree [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:31:10 compute-0 nova_compute[189608]: 2025-11-24 22:31:10.637 189613 DEBUG nova.scheduler.client.report [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:31:10 compute-0 nova_compute[189608]: 2025-11-24 22:31:10.655 189613 DEBUG oslo_concurrency.lockutils [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.289s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:31:10 compute-0 nova_compute[189608]: 2025-11-24 22:31:10.656 189613 DEBUG nova.compute.manager [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 24 22:31:10 compute-0 nova_compute[189608]: 2025-11-24 22:31:10.710 189613 DEBUG nova.compute.manager [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 24 22:31:10 compute-0 nova_compute[189608]: 2025-11-24 22:31:10.711 189613 DEBUG nova.network.neutron [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 24 22:31:10 compute-0 nova_compute[189608]: 2025-11-24 22:31:10.727 189613 INFO nova.virt.libvirt.driver [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 24 22:31:10 compute-0 nova_compute[189608]: 2025-11-24 22:31:10.744 189613 DEBUG nova.compute.manager [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 24 22:31:10 compute-0 nova_compute[189608]: 2025-11-24 22:31:10.848 189613 DEBUG nova.compute.manager [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 24 22:31:10 compute-0 nova_compute[189608]: 2025-11-24 22:31:10.850 189613 DEBUG nova.virt.libvirt.driver [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 24 22:31:10 compute-0 nova_compute[189608]: 2025-11-24 22:31:10.850 189613 INFO nova.virt.libvirt.driver [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Creating image(s)
Nov 24 22:31:10 compute-0 nova_compute[189608]: 2025-11-24 22:31:10.851 189613 DEBUG oslo_concurrency.lockutils [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Acquiring lock "/var/lib/nova/instances/c6cadef7-2599-4c75-a37d-2d1e6d469a82/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:31:10 compute-0 nova_compute[189608]: 2025-11-24 22:31:10.851 189613 DEBUG oslo_concurrency.lockutils [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Lock "/var/lib/nova/instances/c6cadef7-2599-4c75-a37d-2d1e6d469a82/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:31:10 compute-0 nova_compute[189608]: 2025-11-24 22:31:10.852 189613 DEBUG oslo_concurrency.lockutils [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Lock "/var/lib/nova/instances/c6cadef7-2599-4c75-a37d-2d1e6d469a82/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:31:10 compute-0 nova_compute[189608]: 2025-11-24 22:31:10.871 189613 DEBUG oslo_concurrency.processutils [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:31:10 compute-0 nova_compute[189608]: 2025-11-24 22:31:10.966 189613 DEBUG oslo_concurrency.processutils [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:31:10 compute-0 nova_compute[189608]: 2025-11-24 22:31:10.967 189613 DEBUG oslo_concurrency.lockutils [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Acquiring lock "a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:31:10 compute-0 nova_compute[189608]: 2025-11-24 22:31:10.968 189613 DEBUG oslo_concurrency.lockutils [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Lock "a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:31:10 compute-0 nova_compute[189608]: 2025-11-24 22:31:10.980 189613 DEBUG oslo_concurrency.processutils [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:31:11 compute-0 nova_compute[189608]: 2025-11-24 22:31:11.049 189613 DEBUG oslo_concurrency.processutils [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:31:11 compute-0 nova_compute[189608]: 2025-11-24 22:31:11.050 189613 DEBUG oslo_concurrency.processutils [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e,backing_fmt=raw /var/lib/nova/instances/c6cadef7-2599-4c75-a37d-2d1e6d469a82/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:31:11 compute-0 nova_compute[189608]: 2025-11-24 22:31:11.131 189613 DEBUG oslo_concurrency.processutils [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e,backing_fmt=raw /var/lib/nova/instances/c6cadef7-2599-4c75-a37d-2d1e6d469a82/disk 1073741824" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:31:11 compute-0 nova_compute[189608]: 2025-11-24 22:31:11.133 189613 DEBUG oslo_concurrency.lockutils [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Lock "a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.165s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:31:11 compute-0 nova_compute[189608]: 2025-11-24 22:31:11.134 189613 DEBUG oslo_concurrency.processutils [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:31:11 compute-0 nova_compute[189608]: 2025-11-24 22:31:11.230 189613 DEBUG oslo_concurrency.processutils [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:31:11 compute-0 nova_compute[189608]: 2025-11-24 22:31:11.233 189613 DEBUG nova.virt.disk.api [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Checking if we can resize image /var/lib/nova/instances/c6cadef7-2599-4c75-a37d-2d1e6d469a82/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 24 22:31:11 compute-0 nova_compute[189608]: 2025-11-24 22:31:11.235 189613 DEBUG oslo_concurrency.processutils [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c6cadef7-2599-4c75-a37d-2d1e6d469a82/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:31:11 compute-0 nova_compute[189608]: 2025-11-24 22:31:11.326 189613 DEBUG oslo_concurrency.processutils [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c6cadef7-2599-4c75-a37d-2d1e6d469a82/disk --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:31:11 compute-0 nova_compute[189608]: 2025-11-24 22:31:11.328 189613 DEBUG nova.virt.disk.api [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Cannot resize image /var/lib/nova/instances/c6cadef7-2599-4c75-a37d-2d1e6d469a82/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Nov 24 22:31:11 compute-0 nova_compute[189608]: 2025-11-24 22:31:11.329 189613 DEBUG nova.objects.instance [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Lazy-loading 'migration_context' on Instance uuid c6cadef7-2599-4c75-a37d-2d1e6d469a82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:31:11 compute-0 nova_compute[189608]: 2025-11-24 22:31:11.345 189613 DEBUG nova.policy [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1599850e48894151b7909b89547cd9e2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4ac27d3d1c734f4bab455262f79d3106', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 24 22:31:11 compute-0 nova_compute[189608]: 2025-11-24 22:31:11.351 189613 DEBUG nova.virt.libvirt.driver [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 24 22:31:11 compute-0 nova_compute[189608]: 2025-11-24 22:31:11.352 189613 DEBUG nova.virt.libvirt.driver [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Ensure instance console log exists: /var/lib/nova/instances/c6cadef7-2599-4c75-a37d-2d1e6d469a82/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 24 22:31:11 compute-0 nova_compute[189608]: 2025-11-24 22:31:11.353 189613 DEBUG oslo_concurrency.lockutils [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:31:11 compute-0 nova_compute[189608]: 2025-11-24 22:31:11.354 189613 DEBUG oslo_concurrency.lockutils [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:31:11 compute-0 nova_compute[189608]: 2025-11-24 22:31:11.355 189613 DEBUG oslo_concurrency.lockutils [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:31:12 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:12.079 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7e:33:1a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '7e:8a:65:da:aa:a2'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:31:12 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:12.080 106776 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 22:31:12 compute-0 nova_compute[189608]: 2025-11-24 22:31:12.087 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:12 compute-0 nova_compute[189608]: 2025-11-24 22:31:12.493 189613 DEBUG nova.network.neutron [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Successfully created port: cb21872b-f10a-4b2e-887a-5b0c069f8b46 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 24 22:31:13 compute-0 sshd-session[253404]: Invalid user amir from 185.217.1.246 port 55217
Nov 24 22:31:13 compute-0 podman[253445]: 2025-11-24 22:31:13.633780848 +0000 UTC m=+0.172228582 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 22:31:13 compute-0 nova_compute[189608]: 2025-11-24 22:31:13.935 189613 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764023458.9332242, 7e9cad45-0047-443a-9aae-51409c77ea0e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:31:13 compute-0 nova_compute[189608]: 2025-11-24 22:31:13.936 189613 INFO nova.compute.manager [-] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] VM Stopped (Lifecycle Event)
Nov 24 22:31:13 compute-0 nova_compute[189608]: 2025-11-24 22:31:13.956 189613 DEBUG nova.compute.manager [None req-a3f02078-1dc3-4e57-b4f9-eeb78aea98ce - - - - - -] [instance: 7e9cad45-0047-443a-9aae-51409c77ea0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:31:13 compute-0 nova_compute[189608]: 2025-11-24 22:31:13.980 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:14 compute-0 nova_compute[189608]: 2025-11-24 22:31:14.068 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:14 compute-0 sshd-session[253404]: Disconnecting invalid user amir 185.217.1.246 port 55217: Change of username or service not allowed: (amir,ssh-connection) -> (marek,ssh-connection) [preauth]
Nov 24 22:31:15 compute-0 nova_compute[189608]: 2025-11-24 22:31:15.237 189613 DEBUG nova.network.neutron [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Successfully updated port: cb21872b-f10a-4b2e-887a-5b0c069f8b46 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 24 22:31:15 compute-0 nova_compute[189608]: 2025-11-24 22:31:15.254 189613 DEBUG oslo_concurrency.lockutils [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Acquiring lock "refresh_cache-c6cadef7-2599-4c75-a37d-2d1e6d469a82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:31:15 compute-0 nova_compute[189608]: 2025-11-24 22:31:15.254 189613 DEBUG oslo_concurrency.lockutils [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Acquired lock "refresh_cache-c6cadef7-2599-4c75-a37d-2d1e6d469a82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:31:15 compute-0 nova_compute[189608]: 2025-11-24 22:31:15.255 189613 DEBUG nova.network.neutron [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 22:31:15 compute-0 nova_compute[189608]: 2025-11-24 22:31:15.376 189613 DEBUG nova.compute.manager [req-528cdd34-d27a-4cc5-b1f1-f75efe68520b req-b7312beb-697b-4f6e-a0d8-2183a3c092c8 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Received event network-changed-cb21872b-f10a-4b2e-887a-5b0c069f8b46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:31:15 compute-0 nova_compute[189608]: 2025-11-24 22:31:15.377 189613 DEBUG nova.compute.manager [req-528cdd34-d27a-4cc5-b1f1-f75efe68520b req-b7312beb-697b-4f6e-a0d8-2183a3c092c8 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Refreshing instance network info cache due to event network-changed-cb21872b-f10a-4b2e-887a-5b0c069f8b46. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 22:31:15 compute-0 nova_compute[189608]: 2025-11-24 22:31:15.377 189613 DEBUG oslo_concurrency.lockutils [req-528cdd34-d27a-4cc5-b1f1-f75efe68520b req-b7312beb-697b-4f6e-a0d8-2183a3c092c8 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "refresh_cache-c6cadef7-2599-4c75-a37d-2d1e6d469a82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:31:15 compute-0 nova_compute[189608]: 2025-11-24 22:31:15.421 189613 DEBUG nova.network.neutron [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 22:31:15 compute-0 podman[253465]: 2025-11-24 22:31:15.581172898 +0000 UTC m=+0.125017286 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.120 189613 DEBUG nova.network.neutron [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Updating instance_info_cache with network_info: [{"id": "cb21872b-f10a-4b2e-887a-5b0c069f8b46", "address": "fa:16:3e:8f:79:84", "network": {"id": "6c160915-cb1d-4981-a2c7-30899c389f1d", "bridge": "br-int", "label": "tempest-network-smoke--162777084", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ac27d3d1c734f4bab455262f79d3106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb21872b-f1", "ovs_interfaceid": "cb21872b-f10a-4b2e-887a-5b0c069f8b46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.150 189613 DEBUG oslo_concurrency.lockutils [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Releasing lock "refresh_cache-c6cadef7-2599-4c75-a37d-2d1e6d469a82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.151 189613 DEBUG nova.compute.manager [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Instance network_info: |[{"id": "cb21872b-f10a-4b2e-887a-5b0c069f8b46", "address": "fa:16:3e:8f:79:84", "network": {"id": "6c160915-cb1d-4981-a2c7-30899c389f1d", "bridge": "br-int", "label": "tempest-network-smoke--162777084", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ac27d3d1c734f4bab455262f79d3106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb21872b-f1", "ovs_interfaceid": "cb21872b-f10a-4b2e-887a-5b0c069f8b46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.153 189613 DEBUG oslo_concurrency.lockutils [req-528cdd34-d27a-4cc5-b1f1-f75efe68520b req-b7312beb-697b-4f6e-a0d8-2183a3c092c8 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquired lock "refresh_cache-c6cadef7-2599-4c75-a37d-2d1e6d469a82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.153 189613 DEBUG nova.network.neutron [req-528cdd34-d27a-4cc5-b1f1-f75efe68520b req-b7312beb-697b-4f6e-a0d8-2183a3c092c8 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Refreshing network info cache for port cb21872b-f10a-4b2e-887a-5b0c069f8b46 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.160 189613 DEBUG nova.virt.libvirt.driver [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Start _get_guest_xml network_info=[{"id": "cb21872b-f10a-4b2e-887a-5b0c069f8b46", "address": "fa:16:3e:8f:79:84", "network": {"id": "6c160915-cb1d-4981-a2c7-30899c389f1d", "bridge": "br-int", "label": "tempest-network-smoke--162777084", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ac27d3d1c734f4bab455262f79d3106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb21872b-f1", "ovs_interfaceid": "cb21872b-f10a-4b2e-887a-5b0c069f8b46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T22:28:16Z,direct_url=<?>,disk_format='qcow2',id=ec71d7d5-c197-4331-bf8d-e2de71a8419f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='309342b7e3e849b2a5dd56651d8fa068',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T22:28:18Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'size': 0, 'encryption_format': None, 'guest_format': None, 'image_id': 'ec71d7d5-c197-4331-bf8d-e2de71a8419f'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.171 189613 WARNING nova.virt.libvirt.driver [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.188 189613 DEBUG nova.virt.libvirt.host [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.189 189613 DEBUG nova.virt.libvirt.host [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.197 189613 DEBUG nova.virt.libvirt.host [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.198 189613 DEBUG nova.virt.libvirt.host [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.199 189613 DEBUG nova.virt.libvirt.driver [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.200 189613 DEBUG nova.virt.hardware [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-24T22:28:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a49f1e6c-1051-4dea-812e-0063121444a0',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T22:28:16Z,direct_url=<?>,disk_format='qcow2',id=ec71d7d5-c197-4331-bf8d-e2de71a8419f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='309342b7e3e849b2a5dd56651d8fa068',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T22:28:18Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.201 189613 DEBUG nova.virt.hardware [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.202 189613 DEBUG nova.virt.hardware [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.203 189613 DEBUG nova.virt.hardware [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.204 189613 DEBUG nova.virt.hardware [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.205 189613 DEBUG nova.virt.hardware [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.206 189613 DEBUG nova.virt.hardware [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.208 189613 DEBUG nova.virt.hardware [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.208 189613 DEBUG nova.virt.hardware [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.209 189613 DEBUG nova.virt.hardware [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.209 189613 DEBUG nova.virt.hardware [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.215 189613 DEBUG nova.virt.libvirt.vif [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T22:31:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1817599504',display_name='tempest-TestNetworkBasicOps-server-1817599504',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1817599504',id=13,image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLxjgPD57z4oLWbCvYm08S4teLLP6YMFhe7N1vCySOSw0r8PANeipdLFxrtEpWlvlsl3n4CumxGvw/N3/4lwCpIrRKebvTxvXnUb0162l2nbPzja+J98zHQ6uddUaIqquQ==',key_name='tempest-TestNetworkBasicOps-527888256',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4ac27d3d1c734f4bab455262f79d3106',ramdisk_id='',reservation_id='r-v0cvpy4h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-488656933',owner_user_name='tempest-TestNetworkBasicOps-488656933-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T22:31:10Z,user_data=None,user_id='1599850e48894151b7909b89547cd9e2',uuid=c6cadef7-2599-4c75-a37d-2d1e6d469a82,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cb21872b-f10a-4b2e-887a-5b0c069f8b46", "address": "fa:16:3e:8f:79:84", "network": {"id": "6c160915-cb1d-4981-a2c7-30899c389f1d", "bridge": "br-int", "label": "tempest-network-smoke--162777084", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ac27d3d1c734f4bab455262f79d3106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb21872b-f1", "ovs_interfaceid": "cb21872b-f10a-4b2e-887a-5b0c069f8b46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.216 189613 DEBUG nova.network.os_vif_util [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Converting VIF {"id": "cb21872b-f10a-4b2e-887a-5b0c069f8b46", "address": "fa:16:3e:8f:79:84", "network": {"id": "6c160915-cb1d-4981-a2c7-30899c389f1d", "bridge": "br-int", "label": "tempest-network-smoke--162777084", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ac27d3d1c734f4bab455262f79d3106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb21872b-f1", "ovs_interfaceid": "cb21872b-f10a-4b2e-887a-5b0c069f8b46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.217 189613 DEBUG nova.network.os_vif_util [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:79:84,bridge_name='br-int',has_traffic_filtering=True,id=cb21872b-f10a-4b2e-887a-5b0c069f8b46,network=Network(6c160915-cb1d-4981-a2c7-30899c389f1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcb21872b-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.219 189613 DEBUG nova.objects.instance [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Lazy-loading 'pci_devices' on Instance uuid c6cadef7-2599-4c75-a37d-2d1e6d469a82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.233 189613 DEBUG nova.virt.libvirt.driver [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] End _get_guest_xml xml=<domain type="kvm">
Nov 24 22:31:17 compute-0 nova_compute[189608]:   <uuid>c6cadef7-2599-4c75-a37d-2d1e6d469a82</uuid>
Nov 24 22:31:17 compute-0 nova_compute[189608]:   <name>instance-0000000d</name>
Nov 24 22:31:17 compute-0 nova_compute[189608]:   <memory>131072</memory>
Nov 24 22:31:17 compute-0 nova_compute[189608]:   <vcpu>1</vcpu>
Nov 24 22:31:17 compute-0 nova_compute[189608]:   <metadata>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 22:31:17 compute-0 nova_compute[189608]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:       <nova:name>tempest-TestNetworkBasicOps-server-1817599504</nova:name>
Nov 24 22:31:17 compute-0 nova_compute[189608]:       <nova:creationTime>2025-11-24 22:31:17</nova:creationTime>
Nov 24 22:31:17 compute-0 nova_compute[189608]:       <nova:flavor name="m1.nano">
Nov 24 22:31:17 compute-0 nova_compute[189608]:         <nova:memory>128</nova:memory>
Nov 24 22:31:17 compute-0 nova_compute[189608]:         <nova:disk>1</nova:disk>
Nov 24 22:31:17 compute-0 nova_compute[189608]:         <nova:swap>0</nova:swap>
Nov 24 22:31:17 compute-0 nova_compute[189608]:         <nova:ephemeral>0</nova:ephemeral>
Nov 24 22:31:17 compute-0 nova_compute[189608]:         <nova:vcpus>1</nova:vcpus>
Nov 24 22:31:17 compute-0 nova_compute[189608]:       </nova:flavor>
Nov 24 22:31:17 compute-0 nova_compute[189608]:       <nova:owner>
Nov 24 22:31:17 compute-0 nova_compute[189608]:         <nova:user uuid="1599850e48894151b7909b89547cd9e2">tempest-TestNetworkBasicOps-488656933-project-member</nova:user>
Nov 24 22:31:17 compute-0 nova_compute[189608]:         <nova:project uuid="4ac27d3d1c734f4bab455262f79d3106">tempest-TestNetworkBasicOps-488656933</nova:project>
Nov 24 22:31:17 compute-0 nova_compute[189608]:       </nova:owner>
Nov 24 22:31:17 compute-0 nova_compute[189608]:       <nova:root type="image" uuid="ec71d7d5-c197-4331-bf8d-e2de71a8419f"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:       <nova:ports>
Nov 24 22:31:17 compute-0 nova_compute[189608]:         <nova:port uuid="cb21872b-f10a-4b2e-887a-5b0c069f8b46">
Nov 24 22:31:17 compute-0 nova_compute[189608]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:         </nova:port>
Nov 24 22:31:17 compute-0 nova_compute[189608]:       </nova:ports>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     </nova:instance>
Nov 24 22:31:17 compute-0 nova_compute[189608]:   </metadata>
Nov 24 22:31:17 compute-0 nova_compute[189608]:   <sysinfo type="smbios">
Nov 24 22:31:17 compute-0 nova_compute[189608]:     <system>
Nov 24 22:31:17 compute-0 nova_compute[189608]:       <entry name="manufacturer">RDO</entry>
Nov 24 22:31:17 compute-0 nova_compute[189608]:       <entry name="product">OpenStack Compute</entry>
Nov 24 22:31:17 compute-0 nova_compute[189608]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 22:31:17 compute-0 nova_compute[189608]:       <entry name="serial">c6cadef7-2599-4c75-a37d-2d1e6d469a82</entry>
Nov 24 22:31:17 compute-0 nova_compute[189608]:       <entry name="uuid">c6cadef7-2599-4c75-a37d-2d1e6d469a82</entry>
Nov 24 22:31:17 compute-0 nova_compute[189608]:       <entry name="family">Virtual Machine</entry>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     </system>
Nov 24 22:31:17 compute-0 nova_compute[189608]:   </sysinfo>
Nov 24 22:31:17 compute-0 nova_compute[189608]:   <os>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     <boot dev="hd"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     <smbios mode="sysinfo"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:   </os>
Nov 24 22:31:17 compute-0 nova_compute[189608]:   <features>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     <acpi/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     <apic/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     <vmcoreinfo/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:   </features>
Nov 24 22:31:17 compute-0 nova_compute[189608]:   <clock offset="utc">
Nov 24 22:31:17 compute-0 nova_compute[189608]:     <timer name="pit" tickpolicy="delay"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     <timer name="hpet" present="no"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:   </clock>
Nov 24 22:31:17 compute-0 nova_compute[189608]:   <cpu mode="host-model" match="exact">
Nov 24 22:31:17 compute-0 nova_compute[189608]:     <topology sockets="1" cores="1" threads="1"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:   </cpu>
Nov 24 22:31:17 compute-0 nova_compute[189608]:   <devices>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     <disk type="file" device="disk">
Nov 24 22:31:17 compute-0 nova_compute[189608]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:       <source file="/var/lib/nova/instances/c6cadef7-2599-4c75-a37d-2d1e6d469a82/disk"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:       <target dev="vda" bus="virtio"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     </disk>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     <disk type="file" device="cdrom">
Nov 24 22:31:17 compute-0 nova_compute[189608]:       <driver name="qemu" type="raw" cache="none"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:       <source file="/var/lib/nova/instances/c6cadef7-2599-4c75-a37d-2d1e6d469a82/disk.config"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:       <target dev="sda" bus="sata"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     </disk>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     <interface type="ethernet">
Nov 24 22:31:17 compute-0 nova_compute[189608]:       <mac address="fa:16:3e:8f:79:84"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:       <model type="virtio"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:       <driver name="vhost" rx_queue_size="512"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:       <mtu size="1442"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:       <target dev="tapcb21872b-f1"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     </interface>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     <serial type="pty">
Nov 24 22:31:17 compute-0 nova_compute[189608]:       <log file="/var/lib/nova/instances/c6cadef7-2599-4c75-a37d-2d1e6d469a82/console.log" append="off"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     </serial>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     <video>
Nov 24 22:31:17 compute-0 nova_compute[189608]:       <model type="virtio"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     </video>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     <input type="tablet" bus="usb"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     <rng model="virtio">
Nov 24 22:31:17 compute-0 nova_compute[189608]:       <backend model="random">/dev/urandom</backend>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     </rng>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     <controller type="usb" index="0"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     <memballoon model="virtio">
Nov 24 22:31:17 compute-0 nova_compute[189608]:       <stats period="10"/>
Nov 24 22:31:17 compute-0 nova_compute[189608]:     </memballoon>
Nov 24 22:31:17 compute-0 nova_compute[189608]:   </devices>
Nov 24 22:31:17 compute-0 nova_compute[189608]: </domain>
Nov 24 22:31:17 compute-0 nova_compute[189608]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.235 189613 DEBUG nova.compute.manager [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Preparing to wait for external event network-vif-plugged-cb21872b-f10a-4b2e-887a-5b0c069f8b46 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.235 189613 DEBUG oslo_concurrency.lockutils [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Acquiring lock "c6cadef7-2599-4c75-a37d-2d1e6d469a82-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.236 189613 DEBUG oslo_concurrency.lockutils [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Lock "c6cadef7-2599-4c75-a37d-2d1e6d469a82-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.236 189613 DEBUG oslo_concurrency.lockutils [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Lock "c6cadef7-2599-4c75-a37d-2d1e6d469a82-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.237 189613 DEBUG nova.virt.libvirt.vif [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T22:31:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1817599504',display_name='tempest-TestNetworkBasicOps-server-1817599504',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1817599504',id=13,image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLxjgPD57z4oLWbCvYm08S4teLLP6YMFhe7N1vCySOSw0r8PANeipdLFxrtEpWlvlsl3n4CumxGvw/N3/4lwCpIrRKebvTxvXnUb0162l2nbPzja+J98zHQ6uddUaIqquQ==',key_name='tempest-TestNetworkBasicOps-527888256',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4ac27d3d1c734f4bab455262f79d3106',ramdisk_id='',reservation_id='r-v0cvpy4h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-488656933',owner_user_name='tempest-TestNetworkBasicOps-488656933-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T22:31:10Z,user_data=None,user_id='1599850e48894151b7909b89547cd9e2',uuid=c6cadef7-2599-4c75-a37d-2d1e6d469a82,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cb21872b-f10a-4b2e-887a-5b0c069f8b46", "address": "fa:16:3e:8f:79:84", "network": {"id": "6c160915-cb1d-4981-a2c7-30899c389f1d", "bridge": "br-int", "label": "tempest-network-smoke--162777084", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ac27d3d1c734f4bab455262f79d3106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb21872b-f1", "ovs_interfaceid": "cb21872b-f10a-4b2e-887a-5b0c069f8b46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.238 189613 DEBUG nova.network.os_vif_util [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Converting VIF {"id": "cb21872b-f10a-4b2e-887a-5b0c069f8b46", "address": "fa:16:3e:8f:79:84", "network": {"id": "6c160915-cb1d-4981-a2c7-30899c389f1d", "bridge": "br-int", "label": "tempest-network-smoke--162777084", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ac27d3d1c734f4bab455262f79d3106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb21872b-f1", "ovs_interfaceid": "cb21872b-f10a-4b2e-887a-5b0c069f8b46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.239 189613 DEBUG nova.network.os_vif_util [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:79:84,bridge_name='br-int',has_traffic_filtering=True,id=cb21872b-f10a-4b2e-887a-5b0c069f8b46,network=Network(6c160915-cb1d-4981-a2c7-30899c389f1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcb21872b-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.239 189613 DEBUG os_vif [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:79:84,bridge_name='br-int',has_traffic_filtering=True,id=cb21872b-f10a-4b2e-887a-5b0c069f8b46,network=Network(6c160915-cb1d-4981-a2c7-30899c389f1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcb21872b-f1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.241 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.242 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.242 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.249 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.250 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcb21872b-f1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.251 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcb21872b-f1, col_values=(('external_ids', {'iface-id': 'cb21872b-f10a-4b2e-887a-5b0c069f8b46', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8f:79:84', 'vm-uuid': 'c6cadef7-2599-4c75-a37d-2d1e6d469a82'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.254 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:17 compute-0 NetworkManager[56413]: <info>  [1764023477.2568] manager: (tapcb21872b-f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.257 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.269 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.270 189613 INFO os_vif [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:79:84,bridge_name='br-int',has_traffic_filtering=True,id=cb21872b-f10a-4b2e-887a-5b0c069f8b46,network=Network(6c160915-cb1d-4981-a2c7-30899c389f1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcb21872b-f1')
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.338 189613 DEBUG nova.virt.libvirt.driver [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.339 189613 DEBUG nova.virt.libvirt.driver [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.339 189613 DEBUG nova.virt.libvirt.driver [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] No VIF found with MAC fa:16:3e:8f:79:84, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 24 22:31:17 compute-0 nova_compute[189608]: 2025-11-24 22:31:17.340 189613 INFO nova.virt.libvirt.driver [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Using config drive
Nov 24 22:31:18 compute-0 nova_compute[189608]: 2025-11-24 22:31:18.215 189613 INFO nova.virt.libvirt.driver [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Creating config drive at /var/lib/nova/instances/c6cadef7-2599-4c75-a37d-2d1e6d469a82/disk.config
Nov 24 22:31:18 compute-0 nova_compute[189608]: 2025-11-24 22:31:18.228 189613 DEBUG oslo_concurrency.processutils [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c6cadef7-2599-4c75-a37d-2d1e6d469a82/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsndw7w6x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:31:18 compute-0 nova_compute[189608]: 2025-11-24 22:31:18.381 189613 DEBUG oslo_concurrency.processutils [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c6cadef7-2599-4c75-a37d-2d1e6d469a82/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsndw7w6x" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:31:18 compute-0 kernel: tapcb21872b-f1: entered promiscuous mode
Nov 24 22:31:18 compute-0 NetworkManager[56413]: <info>  [1764023478.5015] manager: (tapcb21872b-f1): new Tun device (/org/freedesktop/NetworkManager/Devices/65)
Nov 24 22:31:18 compute-0 ovn_controller[97889]: 2025-11-24T22:31:18Z|00145|binding|INFO|Claiming lport cb21872b-f10a-4b2e-887a-5b0c069f8b46 for this chassis.
Nov 24 22:31:18 compute-0 ovn_controller[97889]: 2025-11-24T22:31:18Z|00146|binding|INFO|cb21872b-f10a-4b2e-887a-5b0c069f8b46: Claiming fa:16:3e:8f:79:84 10.100.0.9
Nov 24 22:31:18 compute-0 nova_compute[189608]: 2025-11-24 22:31:18.506 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:18.532 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:79:84 10.100.0.9'], port_security=['fa:16:3e:8f:79:84 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'c6cadef7-2599-4c75-a37d-2d1e6d469a82', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c160915-cb1d-4981-a2c7-30899c389f1d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ac27d3d1c734f4bab455262f79d3106', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'afc1e2a0-6f63-4b55-b0d0-23144ca125aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5128d898-210e-40a7-b165-d9fdd3199b44, chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], logical_port=cb21872b-f10a-4b2e-887a-5b0c069f8b46) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:31:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:18.533 106776 INFO neutron.agent.ovn.metadata.agent [-] Port cb21872b-f10a-4b2e-887a-5b0c069f8b46 in datapath 6c160915-cb1d-4981-a2c7-30899c389f1d bound to our chassis
Nov 24 22:31:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:18.536 106776 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6c160915-cb1d-4981-a2c7-30899c389f1d
Nov 24 22:31:18 compute-0 nova_compute[189608]: 2025-11-24 22:31:18.540 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:18 compute-0 nova_compute[189608]: 2025-11-24 22:31:18.543 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:18 compute-0 ovn_controller[97889]: 2025-11-24T22:31:18Z|00147|binding|INFO|Setting lport cb21872b-f10a-4b2e-887a-5b0c069f8b46 ovn-installed in OVS
Nov 24 22:31:18 compute-0 ovn_controller[97889]: 2025-11-24T22:31:18Z|00148|binding|INFO|Setting lport cb21872b-f10a-4b2e-887a-5b0c069f8b46 up in Southbound
Nov 24 22:31:18 compute-0 nova_compute[189608]: 2025-11-24 22:31:18.554 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:18.563 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[d3b54fb4-10f8-4573-9e2f-280e4a84c8e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:31:18 compute-0 systemd-udevd[253508]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 22:31:18 compute-0 systemd-machined[155884]: New machine qemu-13-instance-0000000d.
Nov 24 22:31:18 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-0000000d.
Nov 24 22:31:18 compute-0 NetworkManager[56413]: <info>  [1764023478.5977] device (tapcb21872b-f1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 22:31:18 compute-0 NetworkManager[56413]: <info>  [1764023478.5983] device (tapcb21872b-f1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 24 22:31:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:18.620 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[5526fe85-67f8-4940-adab-056ac7497d7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:31:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:18.625 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[0bb07162-5213-4dc3-a29d-eb594719cd90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:31:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:18.666 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[cedebe8e-efe9-487c-8163-b775e749fec0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:31:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:18.687 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[b947ef47-8330-4db0-8167-014f6c77b39d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c160915-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:72:57:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 525956, 'reachable_time': 29113, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253518, 'error': None, 'target': 'ovnmeta-6c160915-cb1d-4981-a2c7-30899c389f1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:31:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:18.715 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[50f67d94-aa9f-465d-ab68-525be98afa04]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6c160915-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525973, 'tstamp': 525973}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253522, 'error': None, 'target': 'ovnmeta-6c160915-cb1d-4981-a2c7-30899c389f1d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6c160915-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525978, 'tstamp': 525978}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253522, 'error': None, 'target': 'ovnmeta-6c160915-cb1d-4981-a2c7-30899c389f1d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:31:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:18.718 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c160915-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:31:18 compute-0 nova_compute[189608]: 2025-11-24 22:31:18.720 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:18 compute-0 nova_compute[189608]: 2025-11-24 22:31:18.722 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:18.722 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c160915-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:31:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:18.722 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 22:31:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:18.723 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6c160915-c0, col_values=(('external_ids', {'iface-id': '5660e6d6-677f-4bf6-8ebf-40ac9c648155'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:31:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:18.723 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 22:31:18 compute-0 nova_compute[189608]: 2025-11-24 22:31:18.809 189613 DEBUG nova.compute.manager [req-5dddefad-73f2-470a-925b-007cc202995f req-c9b6fb41-4a1d-4ce2-bffa-70e483940d47 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Received event network-vif-plugged-cb21872b-f10a-4b2e-887a-5b0c069f8b46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:31:18 compute-0 nova_compute[189608]: 2025-11-24 22:31:18.810 189613 DEBUG oslo_concurrency.lockutils [req-5dddefad-73f2-470a-925b-007cc202995f req-c9b6fb41-4a1d-4ce2-bffa-70e483940d47 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "c6cadef7-2599-4c75-a37d-2d1e6d469a82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:31:18 compute-0 nova_compute[189608]: 2025-11-24 22:31:18.810 189613 DEBUG oslo_concurrency.lockutils [req-5dddefad-73f2-470a-925b-007cc202995f req-c9b6fb41-4a1d-4ce2-bffa-70e483940d47 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "c6cadef7-2599-4c75-a37d-2d1e6d469a82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:31:18 compute-0 nova_compute[189608]: 2025-11-24 22:31:18.811 189613 DEBUG oslo_concurrency.lockutils [req-5dddefad-73f2-470a-925b-007cc202995f req-c9b6fb41-4a1d-4ce2-bffa-70e483940d47 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "c6cadef7-2599-4c75-a37d-2d1e6d469a82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:31:18 compute-0 nova_compute[189608]: 2025-11-24 22:31:18.811 189613 DEBUG nova.compute.manager [req-5dddefad-73f2-470a-925b-007cc202995f req-c9b6fb41-4a1d-4ce2-bffa-70e483940d47 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Processing event network-vif-plugged-cb21872b-f10a-4b2e-887a-5b0c069f8b46 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 24 22:31:19 compute-0 nova_compute[189608]: 2025-11-24 22:31:19.072 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:19 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:19.083 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d2f80616-70e9-484c-836d-1edab81fe5d9, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:31:19 compute-0 nova_compute[189608]: 2025-11-24 22:31:19.258 189613 DEBUG nova.compute.manager [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 24 22:31:19 compute-0 nova_compute[189608]: 2025-11-24 22:31:19.261 189613 DEBUG nova.virt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Emitting event <LifecycleEvent: 1764023479.2614055, c6cadef7-2599-4c75-a37d-2d1e6d469a82 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:31:19 compute-0 nova_compute[189608]: 2025-11-24 22:31:19.262 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] VM Started (Lifecycle Event)
Nov 24 22:31:19 compute-0 nova_compute[189608]: 2025-11-24 22:31:19.286 189613 DEBUG nova.virt.libvirt.driver [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 24 22:31:19 compute-0 nova_compute[189608]: 2025-11-24 22:31:19.294 189613 INFO nova.virt.libvirt.driver [-] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Instance spawned successfully.
Nov 24 22:31:19 compute-0 nova_compute[189608]: 2025-11-24 22:31:19.295 189613 DEBUG nova.virt.libvirt.driver [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 24 22:31:19 compute-0 nova_compute[189608]: 2025-11-24 22:31:19.301 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:31:19 compute-0 nova_compute[189608]: 2025-11-24 22:31:19.308 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 22:31:19 compute-0 nova_compute[189608]: 2025-11-24 22:31:19.320 189613 DEBUG nova.virt.libvirt.driver [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:31:19 compute-0 nova_compute[189608]: 2025-11-24 22:31:19.321 189613 DEBUG nova.virt.libvirt.driver [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:31:19 compute-0 nova_compute[189608]: 2025-11-24 22:31:19.322 189613 DEBUG nova.virt.libvirt.driver [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:31:19 compute-0 nova_compute[189608]: 2025-11-24 22:31:19.322 189613 DEBUG nova.virt.libvirt.driver [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:31:19 compute-0 nova_compute[189608]: 2025-11-24 22:31:19.323 189613 DEBUG nova.virt.libvirt.driver [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:31:19 compute-0 nova_compute[189608]: 2025-11-24 22:31:19.323 189613 DEBUG nova.virt.libvirt.driver [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:31:19 compute-0 nova_compute[189608]: 2025-11-24 22:31:19.329 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 22:31:19 compute-0 nova_compute[189608]: 2025-11-24 22:31:19.330 189613 DEBUG nova.virt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Emitting event <LifecycleEvent: 1764023479.2615383, c6cadef7-2599-4c75-a37d-2d1e6d469a82 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:31:19 compute-0 nova_compute[189608]: 2025-11-24 22:31:19.330 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] VM Paused (Lifecycle Event)
Nov 24 22:31:19 compute-0 nova_compute[189608]: 2025-11-24 22:31:19.367 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:31:19 compute-0 nova_compute[189608]: 2025-11-24 22:31:19.374 189613 DEBUG nova.virt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Emitting event <LifecycleEvent: 1764023479.2642753, c6cadef7-2599-4c75-a37d-2d1e6d469a82 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:31:19 compute-0 nova_compute[189608]: 2025-11-24 22:31:19.375 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] VM Resumed (Lifecycle Event)
Nov 24 22:31:19 compute-0 nova_compute[189608]: 2025-11-24 22:31:19.385 189613 INFO nova.compute.manager [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Took 8.54 seconds to spawn the instance on the hypervisor.
Nov 24 22:31:19 compute-0 nova_compute[189608]: 2025-11-24 22:31:19.385 189613 DEBUG nova.compute.manager [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:31:19 compute-0 nova_compute[189608]: 2025-11-24 22:31:19.395 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:31:19 compute-0 nova_compute[189608]: 2025-11-24 22:31:19.402 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 22:31:19 compute-0 nova_compute[189608]: 2025-11-24 22:31:19.415 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 22:31:19 compute-0 nova_compute[189608]: 2025-11-24 22:31:19.449 189613 INFO nova.compute.manager [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Took 9.13 seconds to build instance.
Nov 24 22:31:19 compute-0 nova_compute[189608]: 2025-11-24 22:31:19.466 189613 DEBUG oslo_concurrency.lockutils [None req-1f359651-6ee5-4c85-82c7-df6bcf9c78dc 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Lock "c6cadef7-2599-4c75-a37d-2d1e6d469a82" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.239s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:31:19 compute-0 nova_compute[189608]: 2025-11-24 22:31:19.596 189613 DEBUG nova.network.neutron [req-528cdd34-d27a-4cc5-b1f1-f75efe68520b req-b7312beb-697b-4f6e-a0d8-2183a3c092c8 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Updated VIF entry in instance network info cache for port cb21872b-f10a-4b2e-887a-5b0c069f8b46. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 22:31:19 compute-0 nova_compute[189608]: 2025-11-24 22:31:19.598 189613 DEBUG nova.network.neutron [req-528cdd34-d27a-4cc5-b1f1-f75efe68520b req-b7312beb-697b-4f6e-a0d8-2183a3c092c8 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Updating instance_info_cache with network_info: [{"id": "cb21872b-f10a-4b2e-887a-5b0c069f8b46", "address": "fa:16:3e:8f:79:84", "network": {"id": "6c160915-cb1d-4981-a2c7-30899c389f1d", "bridge": "br-int", "label": "tempest-network-smoke--162777084", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ac27d3d1c734f4bab455262f79d3106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb21872b-f1", "ovs_interfaceid": "cb21872b-f10a-4b2e-887a-5b0c069f8b46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:31:19 compute-0 nova_compute[189608]: 2025-11-24 22:31:19.626 189613 DEBUG oslo_concurrency.lockutils [req-528cdd34-d27a-4cc5-b1f1-f75efe68520b req-b7312beb-697b-4f6e-a0d8-2183a3c092c8 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Releasing lock "refresh_cache-c6cadef7-2599-4c75-a37d-2d1e6d469a82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:31:20 compute-0 nova_compute[189608]: 2025-11-24 22:31:20.975 189613 DEBUG nova.compute.manager [req-2925b214-54f1-4f09-a56d-60598d52b22a req-dc0cf67a-6cfa-4efc-ad53-57cbf923e1cf c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Received event network-vif-plugged-cb21872b-f10a-4b2e-887a-5b0c069f8b46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:31:20 compute-0 nova_compute[189608]: 2025-11-24 22:31:20.976 189613 DEBUG oslo_concurrency.lockutils [req-2925b214-54f1-4f09-a56d-60598d52b22a req-dc0cf67a-6cfa-4efc-ad53-57cbf923e1cf c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "c6cadef7-2599-4c75-a37d-2d1e6d469a82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:31:20 compute-0 nova_compute[189608]: 2025-11-24 22:31:20.977 189613 DEBUG oslo_concurrency.lockutils [req-2925b214-54f1-4f09-a56d-60598d52b22a req-dc0cf67a-6cfa-4efc-ad53-57cbf923e1cf c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "c6cadef7-2599-4c75-a37d-2d1e6d469a82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:31:20 compute-0 nova_compute[189608]: 2025-11-24 22:31:20.978 189613 DEBUG oslo_concurrency.lockutils [req-2925b214-54f1-4f09-a56d-60598d52b22a req-dc0cf67a-6cfa-4efc-ad53-57cbf923e1cf c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "c6cadef7-2599-4c75-a37d-2d1e6d469a82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:31:20 compute-0 nova_compute[189608]: 2025-11-24 22:31:20.979 189613 DEBUG nova.compute.manager [req-2925b214-54f1-4f09-a56d-60598d52b22a req-dc0cf67a-6cfa-4efc-ad53-57cbf923e1cf c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] No waiting events found dispatching network-vif-plugged-cb21872b-f10a-4b2e-887a-5b0c069f8b46 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 22:31:20 compute-0 nova_compute[189608]: 2025-11-24 22:31:20.979 189613 WARNING nova.compute.manager [req-2925b214-54f1-4f09-a56d-60598d52b22a req-dc0cf67a-6cfa-4efc-ad53-57cbf923e1cf c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Received unexpected event network-vif-plugged-cb21872b-f10a-4b2e-887a-5b0c069f8b46 for instance with vm_state active and task_state None.
Nov 24 22:31:21 compute-0 sshd-session[253485]: Invalid user marek from 185.217.1.246 port 26265
Nov 24 22:31:22 compute-0 nova_compute[189608]: 2025-11-24 22:31:22.258 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:22 compute-0 sshd-session[253485]: Disconnecting invalid user marek 185.217.1.246 port 26265: Change of username or service not allowed: (marek,ssh-connection) -> (root2,ssh-connection) [preauth]
Nov 24 22:31:22 compute-0 podman[253532]: 2025-11-24 22:31:22.580639706 +0000 UTC m=+0.116615494 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, container_name=kepler, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, release-0.7.12=, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, io.openshift.expose-services=, name=ubi9)
Nov 24 22:31:22 compute-0 podman[253534]: 2025-11-24 22:31:22.580395969 +0000 UTC m=+0.108443030 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 22:31:22 compute-0 podman[253533]: 2025-11-24 22:31:22.614912442 +0000 UTC m=+0.153062027 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, version=9.6, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm)
Nov 24 22:31:24 compute-0 nova_compute[189608]: 2025-11-24 22:31:24.076 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:24 compute-0 nova_compute[189608]: 2025-11-24 22:31:24.604 189613 DEBUG nova.compute.manager [req-890c93cf-a5e1-4b9d-aad1-52d354309860 req-33fa57b4-a1ae-461a-8a6e-b1510a2bdc04 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Received event network-changed-cb21872b-f10a-4b2e-887a-5b0c069f8b46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:31:24 compute-0 nova_compute[189608]: 2025-11-24 22:31:24.605 189613 DEBUG nova.compute.manager [req-890c93cf-a5e1-4b9d-aad1-52d354309860 req-33fa57b4-a1ae-461a-8a6e-b1510a2bdc04 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Refreshing instance network info cache due to event network-changed-cb21872b-f10a-4b2e-887a-5b0c069f8b46. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 22:31:24 compute-0 nova_compute[189608]: 2025-11-24 22:31:24.606 189613 DEBUG oslo_concurrency.lockutils [req-890c93cf-a5e1-4b9d-aad1-52d354309860 req-33fa57b4-a1ae-461a-8a6e-b1510a2bdc04 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "refresh_cache-c6cadef7-2599-4c75-a37d-2d1e6d469a82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:31:24 compute-0 nova_compute[189608]: 2025-11-24 22:31:24.607 189613 DEBUG oslo_concurrency.lockutils [req-890c93cf-a5e1-4b9d-aad1-52d354309860 req-33fa57b4-a1ae-461a-8a6e-b1510a2bdc04 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquired lock "refresh_cache-c6cadef7-2599-4c75-a37d-2d1e6d469a82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:31:24 compute-0 nova_compute[189608]: 2025-11-24 22:31:24.608 189613 DEBUG nova.network.neutron [req-890c93cf-a5e1-4b9d-aad1-52d354309860 req-33fa57b4-a1ae-461a-8a6e-b1510a2bdc04 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Refreshing network info cache for port cb21872b-f10a-4b2e-887a-5b0c069f8b46 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 22:31:26 compute-0 nova_compute[189608]: 2025-11-24 22:31:26.055 189613 DEBUG oslo_concurrency.lockutils [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Acquiring lock "f238e71a-660e-497c-8472-193245387bcf" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:31:26 compute-0 nova_compute[189608]: 2025-11-24 22:31:26.057 189613 DEBUG oslo_concurrency.lockutils [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Lock "f238e71a-660e-497c-8472-193245387bcf" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:31:26 compute-0 nova_compute[189608]: 2025-11-24 22:31:26.058 189613 INFO nova.compute.manager [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Rebooting instance
Nov 24 22:31:26 compute-0 nova_compute[189608]: 2025-11-24 22:31:26.070 189613 DEBUG oslo_concurrency.lockutils [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Acquiring lock "refresh_cache-f238e71a-660e-497c-8472-193245387bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:31:26 compute-0 nova_compute[189608]: 2025-11-24 22:31:26.071 189613 DEBUG oslo_concurrency.lockutils [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Acquired lock "refresh_cache-f238e71a-660e-497c-8472-193245387bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:31:26 compute-0 nova_compute[189608]: 2025-11-24 22:31:26.072 189613 DEBUG nova.network.neutron [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 22:31:26 compute-0 nova_compute[189608]: 2025-11-24 22:31:26.093 189613 DEBUG nova.network.neutron [req-890c93cf-a5e1-4b9d-aad1-52d354309860 req-33fa57b4-a1ae-461a-8a6e-b1510a2bdc04 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Updated VIF entry in instance network info cache for port cb21872b-f10a-4b2e-887a-5b0c069f8b46. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 22:31:26 compute-0 nova_compute[189608]: 2025-11-24 22:31:26.094 189613 DEBUG nova.network.neutron [req-890c93cf-a5e1-4b9d-aad1-52d354309860 req-33fa57b4-a1ae-461a-8a6e-b1510a2bdc04 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Updating instance_info_cache with network_info: [{"id": "cb21872b-f10a-4b2e-887a-5b0c069f8b46", "address": "fa:16:3e:8f:79:84", "network": {"id": "6c160915-cb1d-4981-a2c7-30899c389f1d", "bridge": "br-int", "label": "tempest-network-smoke--162777084", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ac27d3d1c734f4bab455262f79d3106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb21872b-f1", "ovs_interfaceid": "cb21872b-f10a-4b2e-887a-5b0c069f8b46", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:31:26 compute-0 nova_compute[189608]: 2025-11-24 22:31:26.112 189613 DEBUG oslo_concurrency.lockutils [req-890c93cf-a5e1-4b9d-aad1-52d354309860 req-33fa57b4-a1ae-461a-8a6e-b1510a2bdc04 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Releasing lock "refresh_cache-c6cadef7-2599-4c75-a37d-2d1e6d469a82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:31:27 compute-0 nova_compute[189608]: 2025-11-24 22:31:27.267 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:27 compute-0 nova_compute[189608]: 2025-11-24 22:31:27.580 189613 DEBUG nova.network.neutron [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Updating instance_info_cache with network_info: [{"id": "fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13", "address": "fa:16:3e:40:76:1e", "network": {"id": "29585b3c-5eec-4652-ae2f-4aa9ec19d924", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-98517945-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "97e21ffeec1c4428ba3d70499fc3281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd48bd9-f9", "ovs_interfaceid": "fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:31:27 compute-0 podman[253588]: 2025-11-24 22:31:27.588414591 +0000 UTC m=+0.131290492 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 24 22:31:27 compute-0 nova_compute[189608]: 2025-11-24 22:31:27.601 189613 DEBUG oslo_concurrency.lockutils [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Releasing lock "refresh_cache-f238e71a-660e-497c-8472-193245387bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:31:27 compute-0 nova_compute[189608]: 2025-11-24 22:31:27.603 189613 DEBUG nova.compute.manager [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:31:27 compute-0 kernel: tapfdd48bd9-f9 (unregistering): left promiscuous mode
Nov 24 22:31:27 compute-0 NetworkManager[56413]: <info>  [1764023487.7806] device (tapfdd48bd9-f9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 24 22:31:27 compute-0 nova_compute[189608]: 2025-11-24 22:31:27.814 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:27 compute-0 ovn_controller[97889]: 2025-11-24T22:31:27Z|00149|binding|INFO|Releasing lport fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13 from this chassis (sb_readonly=0)
Nov 24 22:31:27 compute-0 ovn_controller[97889]: 2025-11-24T22:31:27Z|00150|binding|INFO|Setting lport fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13 down in Southbound
Nov 24 22:31:27 compute-0 ovn_controller[97889]: 2025-11-24T22:31:27Z|00151|binding|INFO|Removing iface tapfdd48bd9-f9 ovn-installed in OVS
Nov 24 22:31:27 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:27.824 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:76:1e 10.100.0.12'], port_security=['fa:16:3e:40:76:1e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f238e71a-660e-497c-8472-193245387bcf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-29585b3c-5eec-4652-ae2f-4aa9ec19d924', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '97e21ffeec1c4428ba3d70499fc3281f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f7d22eb6-0a82-485c-96cc-cd31ea984470', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.192'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a063a9f4-1c3d-438a-9e7c-e5a5c01b330e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], logical_port=fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:31:27 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:27.827 106776 INFO neutron.agent.ovn.metadata.agent [-] Port fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13 in datapath 29585b3c-5eec-4652-ae2f-4aa9ec19d924 unbound from our chassis
Nov 24 22:31:27 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:27.830 106776 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 29585b3c-5eec-4652-ae2f-4aa9ec19d924, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 22:31:27 compute-0 nova_compute[189608]: 2025-11-24 22:31:27.831 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:27 compute-0 nova_compute[189608]: 2025-11-24 22:31:27.837 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:27 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:27.837 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[bba061fb-4849-4d9c-8c8b-fea9da582442]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:31:27 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:27.840 106776 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924 namespace which is not needed anymore
Nov 24 22:31:27 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Nov 24 22:31:27 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 43.411s CPU time.
Nov 24 22:31:27 compute-0 systemd-machined[155884]: Machine qemu-9-instance-00000009 terminated.
Nov 24 22:31:27 compute-0 nova_compute[189608]: 2025-11-24 22:31:27.967 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:27 compute-0 nova_compute[189608]: 2025-11-24 22:31:27.979 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.015 189613 INFO nova.virt.libvirt.driver [-] [instance: f238e71a-660e-497c-8472-193245387bcf] Instance destroyed successfully.
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.016 189613 DEBUG nova.objects.instance [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Lazy-loading 'resources' on Instance uuid f238e71a-660e-497c-8472-193245387bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.030 189613 DEBUG nova.virt.libvirt.vif [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T22:29:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1585588029',display_name='tempest-ServerActionsTestJSON-server-1585588029',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1585588029',id=9,image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDLE9GM2vVS7DtVhD6R5uAcKdwWIHZiUIj0cZuSYgN8E0Q128lQ7w/rrfvzePQt5xD3e+tmmR17Qm6/SP88RdZiNDkcZe488bZoDDPSOfWrMiNmhlRVlcu8KaGfz+0SLYw==',key_name='tempest-keypair-731506490',keypairs=<?>,launch_index=0,launched_at=2025-11-24T22:30:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='97e21ffeec1c4428ba3d70499fc3281f',ramdisk_id='',reservation_id='r-0mavx5gw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-2097692874',owner_user_name='tempest-ServerActionsTestJSON-2097692874-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T22:31:27Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='11288fa7771048b4a8faf1d6485ab059',uuid=f238e71a-660e-497c-8472-193245387bcf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13", "address": "fa:16:3e:40:76:1e", "network": {"id": "29585b3c-5eec-4652-ae2f-4aa9ec19d924", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-98517945-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "97e21ffeec1c4428ba3d70499fc3281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, 
"connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd48bd9-f9", "ovs_interfaceid": "fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.030 189613 DEBUG nova.network.os_vif_util [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Converting VIF {"id": "fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13", "address": "fa:16:3e:40:76:1e", "network": {"id": "29585b3c-5eec-4652-ae2f-4aa9ec19d924", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-98517945-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "97e21ffeec1c4428ba3d70499fc3281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd48bd9-f9", "ovs_interfaceid": "fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.031 189613 DEBUG nova.network.os_vif_util [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:40:76:1e,bridge_name='br-int',has_traffic_filtering=True,id=fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13,network=Network(29585b3c-5eec-4652-ae2f-4aa9ec19d924),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdd48bd9-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.031 189613 DEBUG os_vif [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:40:76:1e,bridge_name='br-int',has_traffic_filtering=True,id=fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13,network=Network(29585b3c-5eec-4652-ae2f-4aa9ec19d924),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdd48bd9-f9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.033 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.033 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfdd48bd9-f9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.035 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.037 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.039 189613 INFO os_vif [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:40:76:1e,bridge_name='br-int',has_traffic_filtering=True,id=fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13,network=Network(29585b3c-5eec-4652-ae2f-4aa9ec19d924),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdd48bd9-f9')
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.049 189613 DEBUG nova.virt.libvirt.driver [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Start _get_guest_xml network_info=[{"id": "fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13", "address": "fa:16:3e:40:76:1e", "network": {"id": "29585b3c-5eec-4652-ae2f-4aa9ec19d924", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-98517945-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "97e21ffeec1c4428ba3d70499fc3281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd48bd9-f9", "ovs_interfaceid": "fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=ec71d7d5-c197-4331-bf8d-e2de71a8419f,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'size': 0, 'encryption_format': None, 'guest_format': None, 'image_id': 'ec71d7d5-c197-4331-bf8d-e2de71a8419f'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.055 189613 WARNING nova.virt.libvirt.driver [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.064 189613 DEBUG nova.virt.libvirt.host [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.065 189613 DEBUG nova.virt.libvirt.host [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.073 189613 DEBUG nova.compute.manager [req-18befaec-32ad-466f-851e-fdae740cbc7f req-902929b2-c8a0-4eb5-aaad-13327ba38d7e c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Received event network-vif-unplugged-fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.073 189613 DEBUG oslo_concurrency.lockutils [req-18befaec-32ad-466f-851e-fdae740cbc7f req-902929b2-c8a0-4eb5-aaad-13327ba38d7e c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "f238e71a-660e-497c-8472-193245387bcf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.074 189613 DEBUG oslo_concurrency.lockutils [req-18befaec-32ad-466f-851e-fdae740cbc7f req-902929b2-c8a0-4eb5-aaad-13327ba38d7e c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "f238e71a-660e-497c-8472-193245387bcf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.074 189613 DEBUG oslo_concurrency.lockutils [req-18befaec-32ad-466f-851e-fdae740cbc7f req-902929b2-c8a0-4eb5-aaad-13327ba38d7e c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "f238e71a-660e-497c-8472-193245387bcf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.074 189613 DEBUG nova.compute.manager [req-18befaec-32ad-466f-851e-fdae740cbc7f req-902929b2-c8a0-4eb5-aaad-13327ba38d7e c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] No waiting events found dispatching network-vif-unplugged-fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.074 189613 WARNING nova.compute.manager [req-18befaec-32ad-466f-851e-fdae740cbc7f req-902929b2-c8a0-4eb5-aaad-13327ba38d7e c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Received unexpected event network-vif-unplugged-fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13 for instance with vm_state active and task_state reboot_started_hard.
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.075 189613 DEBUG nova.virt.libvirt.host [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.075 189613 DEBUG nova.virt.libvirt.host [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.076 189613 DEBUG nova.virt.libvirt.driver [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.076 189613 DEBUG nova.virt.hardware [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-24T22:28:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a49f1e6c-1051-4dea-812e-0063121444a0',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=ec71d7d5-c197-4331-bf8d-e2de71a8419f,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.076 189613 DEBUG nova.virt.hardware [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.076 189613 DEBUG nova.virt.hardware [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.077 189613 DEBUG nova.virt.hardware [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.077 189613 DEBUG nova.virt.hardware [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.077 189613 DEBUG nova.virt.hardware [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.077 189613 DEBUG nova.virt.hardware [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.077 189613 DEBUG nova.virt.hardware [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.078 189613 DEBUG nova.virt.hardware [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.078 189613 DEBUG nova.virt.hardware [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.078 189613 DEBUG nova.virt.hardware [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.078 189613 DEBUG nova.objects.instance [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Lazy-loading 'vcpu_model' on Instance uuid f238e71a-660e-497c-8472-193245387bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:31:28 compute-0 neutron-haproxy-ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924[252346]: [NOTICE]   (252350) : haproxy version is 2.8.14-c23fe91
Nov 24 22:31:28 compute-0 neutron-haproxy-ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924[252346]: [NOTICE]   (252350) : path to executable is /usr/sbin/haproxy
Nov 24 22:31:28 compute-0 neutron-haproxy-ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924[252346]: [WARNING]  (252350) : Exiting Master process...
Nov 24 22:31:28 compute-0 neutron-haproxy-ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924[252346]: [WARNING]  (252350) : Exiting Master process...
Nov 24 22:31:28 compute-0 neutron-haproxy-ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924[252346]: [ALERT]    (252350) : Current worker (252352) exited with code 143 (Terminated)
Nov 24 22:31:28 compute-0 neutron-haproxy-ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924[252346]: [WARNING]  (252350) : All workers exited. Exiting... (0)
Nov 24 22:31:28 compute-0 systemd[1]: libpod-648d59efe6ca350a93bc942afa146decb101d025ab75b344fbc656f64aa64a4b.scope: Deactivated successfully.
Nov 24 22:31:28 compute-0 podman[253636]: 2025-11-24 22:31:28.100176622 +0000 UTC m=+0.103530588 container died 648d59efe6ca350a93bc942afa146decb101d025ab75b344fbc656f64aa64a4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.101 189613 DEBUG oslo_concurrency.processutils [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f238e71a-660e-497c-8472-193245387bcf/disk.config --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:31:28 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-648d59efe6ca350a93bc942afa146decb101d025ab75b344fbc656f64aa64a4b-userdata-shm.mount: Deactivated successfully.
Nov 24 22:31:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-12ed94dc02eadc742cf871717cf4f4e6bb56b5b461328eb566351902d7da9122-merged.mount: Deactivated successfully.
Nov 24 22:31:28 compute-0 podman[253636]: 2025-11-24 22:31:28.151846517 +0000 UTC m=+0.155200453 container cleanup 648d59efe6ca350a93bc942afa146decb101d025ab75b344fbc656f64aa64a4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 24 22:31:28 compute-0 systemd[1]: libpod-conmon-648d59efe6ca350a93bc942afa146decb101d025ab75b344fbc656f64aa64a4b.scope: Deactivated successfully.
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.191 189613 DEBUG oslo_concurrency.processutils [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f238e71a-660e-497c-8472-193245387bcf/disk.config --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.192 189613 DEBUG oslo_concurrency.lockutils [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Acquiring lock "/var/lib/nova/instances/f238e71a-660e-497c-8472-193245387bcf/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.193 189613 DEBUG oslo_concurrency.lockutils [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Lock "/var/lib/nova/instances/f238e71a-660e-497c-8472-193245387bcf/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.193 189613 DEBUG oslo_concurrency.lockutils [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Lock "/var/lib/nova/instances/f238e71a-660e-497c-8472-193245387bcf/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.195 189613 DEBUG nova.virt.libvirt.vif [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T22:29:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1585588029',display_name='tempest-ServerActionsTestJSON-server-1585588029',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1585588029',id=9,image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDLE9GM2vVS7DtVhD6R5uAcKdwWIHZiUIj0cZuSYgN8E0Q128lQ7w/rrfvzePQt5xD3e+tmmR17Qm6/SP88RdZiNDkcZe488bZoDDPSOfWrMiNmhlRVlcu8KaGfz+0SLYw==',key_name='tempest-keypair-731506490',keypairs=<?>,launch_index=0,launched_at=2025-11-24T22:30:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='97e21ffeec1c4428ba3d70499fc3281f',ramdisk_id='',reservation_id='r-0mavx5gw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-2097692874',owner_user_name='tempest-ServerActionsTestJSON-2097692874-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T22:31:27Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='11288fa7771048b4a8faf1d6485ab059',uuid=f238e71a-660e-497c-8472-193245387bcf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13", "address": "fa:16:3e:40:76:1e", "network": {"id": "29585b3c-5eec-4652-ae2f-4aa9ec19d924", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-98517945-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "97e21ffeec1c4428ba3d70499fc3281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd48bd9-f9", "ovs_interfaceid": "fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.196 189613 DEBUG nova.network.os_vif_util [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Converting VIF {"id": "fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13", "address": "fa:16:3e:40:76:1e", "network": {"id": "29585b3c-5eec-4652-ae2f-4aa9ec19d924", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-98517945-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "97e21ffeec1c4428ba3d70499fc3281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd48bd9-f9", "ovs_interfaceid": "fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.196 189613 DEBUG nova.network.os_vif_util [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:40:76:1e,bridge_name='br-int',has_traffic_filtering=True,id=fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13,network=Network(29585b3c-5eec-4652-ae2f-4aa9ec19d924),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdd48bd9-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.197 189613 DEBUG nova.objects.instance [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Lazy-loading 'pci_devices' on Instance uuid f238e71a-660e-497c-8472-193245387bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.218 189613 DEBUG nova.virt.libvirt.driver [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] End _get_guest_xml xml=<domain type="kvm">
Nov 24 22:31:28 compute-0 nova_compute[189608]:   <uuid>f238e71a-660e-497c-8472-193245387bcf</uuid>
Nov 24 22:31:28 compute-0 nova_compute[189608]:   <name>instance-00000009</name>
Nov 24 22:31:28 compute-0 nova_compute[189608]:   <memory>131072</memory>
Nov 24 22:31:28 compute-0 nova_compute[189608]:   <vcpu>1</vcpu>
Nov 24 22:31:28 compute-0 nova_compute[189608]:   <metadata>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 22:31:28 compute-0 nova_compute[189608]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:       <nova:name>tempest-ServerActionsTestJSON-server-1585588029</nova:name>
Nov 24 22:31:28 compute-0 nova_compute[189608]:       <nova:creationTime>2025-11-24 22:31:28</nova:creationTime>
Nov 24 22:31:28 compute-0 nova_compute[189608]:       <nova:flavor name="m1.nano">
Nov 24 22:31:28 compute-0 nova_compute[189608]:         <nova:memory>128</nova:memory>
Nov 24 22:31:28 compute-0 nova_compute[189608]:         <nova:disk>1</nova:disk>
Nov 24 22:31:28 compute-0 nova_compute[189608]:         <nova:swap>0</nova:swap>
Nov 24 22:31:28 compute-0 nova_compute[189608]:         <nova:ephemeral>0</nova:ephemeral>
Nov 24 22:31:28 compute-0 nova_compute[189608]:         <nova:vcpus>1</nova:vcpus>
Nov 24 22:31:28 compute-0 nova_compute[189608]:       </nova:flavor>
Nov 24 22:31:28 compute-0 nova_compute[189608]:       <nova:owner>
Nov 24 22:31:28 compute-0 nova_compute[189608]:         <nova:user uuid="11288fa7771048b4a8faf1d6485ab059">tempest-ServerActionsTestJSON-2097692874-project-member</nova:user>
Nov 24 22:31:28 compute-0 nova_compute[189608]:         <nova:project uuid="97e21ffeec1c4428ba3d70499fc3281f">tempest-ServerActionsTestJSON-2097692874</nova:project>
Nov 24 22:31:28 compute-0 nova_compute[189608]:       </nova:owner>
Nov 24 22:31:28 compute-0 nova_compute[189608]:       <nova:root type="image" uuid="ec71d7d5-c197-4331-bf8d-e2de71a8419f"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:       <nova:ports>
Nov 24 22:31:28 compute-0 nova_compute[189608]:         <nova:port uuid="fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13">
Nov 24 22:31:28 compute-0 nova_compute[189608]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:         </nova:port>
Nov 24 22:31:28 compute-0 nova_compute[189608]:       </nova:ports>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     </nova:instance>
Nov 24 22:31:28 compute-0 nova_compute[189608]:   </metadata>
Nov 24 22:31:28 compute-0 nova_compute[189608]:   <sysinfo type="smbios">
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <system>
Nov 24 22:31:28 compute-0 nova_compute[189608]:       <entry name="manufacturer">RDO</entry>
Nov 24 22:31:28 compute-0 nova_compute[189608]:       <entry name="product">OpenStack Compute</entry>
Nov 24 22:31:28 compute-0 nova_compute[189608]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 22:31:28 compute-0 nova_compute[189608]:       <entry name="serial">f238e71a-660e-497c-8472-193245387bcf</entry>
Nov 24 22:31:28 compute-0 nova_compute[189608]:       <entry name="uuid">f238e71a-660e-497c-8472-193245387bcf</entry>
Nov 24 22:31:28 compute-0 nova_compute[189608]:       <entry name="family">Virtual Machine</entry>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     </system>
Nov 24 22:31:28 compute-0 nova_compute[189608]:   </sysinfo>
Nov 24 22:31:28 compute-0 nova_compute[189608]:   <os>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <boot dev="hd"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <smbios mode="sysinfo"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:   </os>
Nov 24 22:31:28 compute-0 nova_compute[189608]:   <features>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <acpi/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <apic/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <vmcoreinfo/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:   </features>
Nov 24 22:31:28 compute-0 nova_compute[189608]:   <clock offset="utc">
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <timer name="pit" tickpolicy="delay"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <timer name="hpet" present="no"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:   </clock>
Nov 24 22:31:28 compute-0 nova_compute[189608]:   <cpu mode="host-model" match="exact">
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <topology sockets="1" cores="1" threads="1"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:   </cpu>
Nov 24 22:31:28 compute-0 nova_compute[189608]:   <devices>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <disk type="file" device="disk">
Nov 24 22:31:28 compute-0 nova_compute[189608]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:       <source file="/var/lib/nova/instances/f238e71a-660e-497c-8472-193245387bcf/disk"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:       <target dev="vda" bus="virtio"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     </disk>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <disk type="file" device="cdrom">
Nov 24 22:31:28 compute-0 nova_compute[189608]:       <driver name="qemu" type="raw" cache="none"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:       <source file="/var/lib/nova/instances/f238e71a-660e-497c-8472-193245387bcf/disk.config"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:       <target dev="sda" bus="sata"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     </disk>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <interface type="ethernet">
Nov 24 22:31:28 compute-0 nova_compute[189608]:       <mac address="fa:16:3e:40:76:1e"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:       <model type="virtio"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:       <driver name="vhost" rx_queue_size="512"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:       <mtu size="1442"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:       <target dev="tapfdd48bd9-f9"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     </interface>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <serial type="pty">
Nov 24 22:31:28 compute-0 nova_compute[189608]:       <log file="/var/lib/nova/instances/f238e71a-660e-497c-8472-193245387bcf/console.log" append="off"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     </serial>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <video>
Nov 24 22:31:28 compute-0 nova_compute[189608]:       <model type="virtio"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     </video>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <input type="tablet" bus="usb"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <input type="keyboard" bus="usb"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <rng model="virtio">
Nov 24 22:31:28 compute-0 nova_compute[189608]:       <backend model="random">/dev/urandom</backend>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     </rng>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <controller type="usb" index="0"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     <memballoon model="virtio">
Nov 24 22:31:28 compute-0 nova_compute[189608]:       <stats period="10"/>
Nov 24 22:31:28 compute-0 nova_compute[189608]:     </memballoon>
Nov 24 22:31:28 compute-0 nova_compute[189608]:   </devices>
Nov 24 22:31:28 compute-0 nova_compute[189608]: </domain>
Nov 24 22:31:28 compute-0 nova_compute[189608]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.228 189613 DEBUG oslo_concurrency.processutils [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f238e71a-660e-497c-8472-193245387bcf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:31:28 compute-0 podman[253674]: 2025-11-24 22:31:28.247631953 +0000 UTC m=+0.061182131 container remove 648d59efe6ca350a93bc942afa146decb101d025ab75b344fbc656f64aa64a4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 24 22:31:28 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:28.255 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[60c1e917-e5ac-4444-ad76-e60007f3d536]: (4, ('Mon Nov 24 10:31:27 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924 (648d59efe6ca350a93bc942afa146decb101d025ab75b344fbc656f64aa64a4b)\n648d59efe6ca350a93bc942afa146decb101d025ab75b344fbc656f64aa64a4b\nMon Nov 24 10:31:28 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924 (648d59efe6ca350a93bc942afa146decb101d025ab75b344fbc656f64aa64a4b)\n648d59efe6ca350a93bc942afa146decb101d025ab75b344fbc656f64aa64a4b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:31:28 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:28.257 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[6af51924-f92d-418d-aabf-a79640ca7814]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:31:28 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:28.258 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap29585b3c-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.261 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:28 compute-0 kernel: tap29585b3c-50: left promiscuous mode
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.278 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:28 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:28.279 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[4c018ea1-53ab-494a-8677-7748bc8c92b1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:31:28 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:28.298 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[25f67a26-a795-47d9-aed9-da4a90633aaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:31:28 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:28.300 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[09bd780b-92c8-4ecd-8c88-4d2e4d74c210]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.303 189613 DEBUG oslo_concurrency.processutils [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f238e71a-660e-497c-8472-193245387bcf/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.303 189613 DEBUG oslo_concurrency.processutils [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f238e71a-660e-497c-8472-193245387bcf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:31:28 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:28.321 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[f1e7301e-0a16-4f24-b336-0bec16aeba8d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 525538, 'reachable_time': 24583, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253693, 'error': None, 'target': 'ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:31:28 compute-0 systemd[1]: run-netns-ovnmeta\x2d29585b3c\x2d5eec\x2d4652\x2dae2f\x2d4aa9ec19d924.mount: Deactivated successfully.
Nov 24 22:31:28 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:28.326 106891 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 24 22:31:28 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:28.327 106891 DEBUG oslo.privsep.daemon [-] privsep: reply[d4bb25f5-a04a-4868-be7b-c2fdbcecc6aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.374 189613 DEBUG oslo_concurrency.processutils [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f238e71a-660e-497c-8472-193245387bcf/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.376 189613 DEBUG nova.objects.instance [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Lazy-loading 'trusted_certs' on Instance uuid f238e71a-660e-497c-8472-193245387bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.394 189613 DEBUG oslo_concurrency.processutils [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.455 189613 DEBUG oslo_concurrency.processutils [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.456 189613 DEBUG nova.virt.disk.api [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Checking if we can resize image /var/lib/nova/instances/f238e71a-660e-497c-8472-193245387bcf/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.456 189613 DEBUG oslo_concurrency.processutils [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f238e71a-660e-497c-8472-193245387bcf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.516 189613 DEBUG oslo_concurrency.processutils [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f238e71a-660e-497c-8472-193245387bcf/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.516 189613 DEBUG nova.virt.disk.api [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Cannot resize image /var/lib/nova/instances/f238e71a-660e-497c-8472-193245387bcf/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.517 189613 DEBUG nova.objects.instance [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Lazy-loading 'migration_context' on Instance uuid f238e71a-660e-497c-8472-193245387bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.529 189613 DEBUG nova.virt.libvirt.vif [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T22:29:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1585588029',display_name='tempest-ServerActionsTestJSON-server-1585588029',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1585588029',id=9,image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDLE9GM2vVS7DtVhD6R5uAcKdwWIHZiUIj0cZuSYgN8E0Q128lQ7w/rrfvzePQt5xD3e+tmmR17Qm6/SP88RdZiNDkcZe488bZoDDPSOfWrMiNmhlRVlcu8KaGfz+0SLYw==',key_name='tempest-keypair-731506490',keypairs=<?>,launch_index=0,launched_at=2025-11-24T22:30:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='97e21ffeec1c4428ba3d70499fc3281f',ramdisk_id='',reservation_id='r-0mavx5gw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-2097692874',owner_user_name='tempest-ServerActionsTestJSON-2097692874-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T22:31:27Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='11288fa7771048b4a8faf1d6485ab059',uuid=f238e71a-660e-497c-8472-193245387bcf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13", "address": "fa:16:3e:40:76:1e", "network": {"id": "29585b3c-5eec-4652-ae2f-4aa9ec19d924", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-98517945-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "97e21ffeec1c4428ba3d70499fc3281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd48bd9-f9", "ovs_interfaceid": "fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.530 189613 DEBUG nova.network.os_vif_util [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Converting VIF {"id": "fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13", "address": "fa:16:3e:40:76:1e", "network": {"id": "29585b3c-5eec-4652-ae2f-4aa9ec19d924", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-98517945-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "97e21ffeec1c4428ba3d70499fc3281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd48bd9-f9", "ovs_interfaceid": "fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.530 189613 DEBUG nova.network.os_vif_util [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:40:76:1e,bridge_name='br-int',has_traffic_filtering=True,id=fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13,network=Network(29585b3c-5eec-4652-ae2f-4aa9ec19d924),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdd48bd9-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.531 189613 DEBUG os_vif [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:40:76:1e,bridge_name='br-int',has_traffic_filtering=True,id=fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13,network=Network(29585b3c-5eec-4652-ae2f-4aa9ec19d924),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdd48bd9-f9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.531 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.532 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.532 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.535 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.535 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfdd48bd9-f9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.535 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfdd48bd9-f9, col_values=(('external_ids', {'iface-id': 'fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:40:76:1e', 'vm-uuid': 'f238e71a-660e-497c-8472-193245387bcf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:31:28 compute-0 NetworkManager[56413]: <info>  [1764023488.5385] manager: (tapfdd48bd9-f9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.539 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.548 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.548 189613 INFO os_vif [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:40:76:1e,bridge_name='br-int',has_traffic_filtering=True,id=fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13,network=Network(29585b3c-5eec-4652-ae2f-4aa9ec19d924),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdd48bd9-f9')
Nov 24 22:31:28 compute-0 kernel: tapfdd48bd9-f9: entered promiscuous mode
Nov 24 22:31:28 compute-0 systemd-udevd[253614]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 22:31:28 compute-0 ovn_controller[97889]: 2025-11-24T22:31:28Z|00152|binding|INFO|Claiming lport fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13 for this chassis.
Nov 24 22:31:28 compute-0 ovn_controller[97889]: 2025-11-24T22:31:28Z|00153|binding|INFO|fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13: Claiming fa:16:3e:40:76:1e 10.100.0.12
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.660 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:28 compute-0 NetworkManager[56413]: <info>  [1764023488.6630] manager: (tapfdd48bd9-f9): new Tun device (/org/freedesktop/NetworkManager/Devices/67)
Nov 24 22:31:28 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:28.669 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:76:1e 10.100.0.12'], port_security=['fa:16:3e:40:76:1e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f238e71a-660e-497c-8472-193245387bcf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-29585b3c-5eec-4652-ae2f-4aa9ec19d924', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '97e21ffeec1c4428ba3d70499fc3281f', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'f7d22eb6-0a82-485c-96cc-cd31ea984470', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.192'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a063a9f4-1c3d-438a-9e7c-e5a5c01b330e, chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], logical_port=fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:31:28 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:28.670 106776 INFO neutron.agent.ovn.metadata.agent [-] Port fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13 in datapath 29585b3c-5eec-4652-ae2f-4aa9ec19d924 bound to our chassis
Nov 24 22:31:28 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:28.672 106776 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 29585b3c-5eec-4652-ae2f-4aa9ec19d924
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.676 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:28 compute-0 ovn_controller[97889]: 2025-11-24T22:31:28Z|00154|binding|INFO|Setting lport fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13 ovn-installed in OVS
Nov 24 22:31:28 compute-0 ovn_controller[97889]: 2025-11-24T22:31:28Z|00155|binding|INFO|Setting lport fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13 up in Southbound
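[editor's note] At this point ovn-controller has claimed the logical port for this chassis, marked the OVS interface as ovn-installed, and flipped the Port_Binding to up in the Southbound DB. A quick, hedged way to confirm the local half of that from the compute node (not part of ovn-controller; just an ovs-vsctl query wrapped in Python) is:

    # Hedged verification: after the claim above, the OVS Interface row should
    # carry external_ids:ovn-installed="true". Interface name is from the log.
    import subprocess

    iface = 'tapfdd48bd9-f9'
    val = subprocess.run(
        ['ovs-vsctl', 'get', 'Interface', iface, 'external_ids:ovn-installed'],
        capture_output=True, text=True, check=False)
    print(iface, 'ovn-installed =', val.stdout.strip())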
Nov 24 22:31:28 compute-0 NetworkManager[56413]: <info>  [1764023488.6856] device (tapfdd48bd9-f9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.686 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:28 compute-0 NetworkManager[56413]: <info>  [1764023488.6910] device (tapfdd48bd9-f9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.694 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:28 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:28.696 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[2b3ac780-5744-4519-a723-ee361378d902]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:31:28 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:28.697 106776 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap29585b3c-51 in ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 24 22:31:28 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:28.699 240020 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap29585b3c-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 24 22:31:28 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:28.700 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[3ee7b1ee-6f41-4829-af1e-85e280e66db2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:31:28 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:28.703 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[ad445564-e851-4510-baba-f81480698280]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:31:28 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:28.719 106891 DEBUG oslo.privsep.daemon [-] privsep: reply[d521a152-950b-4128-89e0-a32c58d5715e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
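[editor's note] The privsep replies in this stretch correspond to the metadata agent provisioning the ovnmeta-29585b3c-... namespace and a veth pair, with tap29585b3c-51 placed inside it and tap29585b3c-50 left in the root namespace for OVS (the agent does this through pyroute2 behind oslo.privsep). A rough equivalent done directly with pyroute2 — a sketch, assuming root privileges and that the namespace does not exist yet; only the interface and namespace names come from the log — is:

    # Rough pyroute2 sketch of the namespace/veth provisioning logged above.
    from pyroute2 import IPRoute, netns

    NS = 'ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924'
    netns.create(NS)                                    # like `ip netns add`

    ip = IPRoute()
    ip.link('add', ifname='tap29585b3c-50', kind='veth', peer='tap29585b3c-51')
    inner = ip.link_lookup(ifname='tap29585b3c-51')[0]
    ip.link('set', index=inner, net_ns_fd=NS)           # move inner end into the ns
    outer = ip.link_lookup(ifname='tap29585b3c-50')[0]
    ip.link('set', index=outer, state='up')
    ip.close()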
Nov 24 22:31:28 compute-0 systemd-machined[155884]: New machine qemu-14-instance-00000009.
Nov 24 22:31:28 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-00000009.
Nov 24 22:31:28 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:28.758 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[8885ea36-3d7e-4cd9-9799-6c7e64f02327]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:31:28 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:28.794 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[e2b36f42-3bdb-4194-b765-e5ad2c53e163]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:31:28 compute-0 NetworkManager[56413]: <info>  [1764023488.8051] manager: (tap29585b3c-50): new Veth device (/org/freedesktop/NetworkManager/Devices/68)
Nov 24 22:31:28 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:28.802 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[6f29c13f-5acb-4c7f-889c-77a4e8ebb343]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.823 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.856 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.856 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.856 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 22:31:28 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:28.862 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[6c51a238-07cb-4d5a-8b8c-cd47d144df2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:31:28 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:28.866 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[cdd20c33-b923-437a-9f11-48d3ff7cb406]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.875 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "refresh_cache-f238e71a-660e-497c-8472-193245387bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.876 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquired lock "refresh_cache-f238e71a-660e-497c-8472-193245387bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.876 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: f238e71a-660e-497c-8472-193245387bcf] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 24 22:31:28 compute-0 nova_compute[189608]: 2025-11-24 22:31:28.876 189613 DEBUG nova.objects.instance [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lazy-loading 'info_cache' on Instance uuid f238e71a-660e-497c-8472-193245387bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:31:28 compute-0 NetworkManager[56413]: <info>  [1764023488.8977] device (tap29585b3c-50): carrier: link connected
Nov 24 22:31:28 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:28.906 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[071fe826-bef2-450e-90a1-a68db55189eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:31:28 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:28.933 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[8d88dda6-71ed-4c98-80f6-93b2d245b941]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap29585b3c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:75:b9:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 533638, 'reachable_time': 15692, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253750, 'error': None, 'target': 'ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:31:28 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:28.953 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[03ad4d8a-8216-4861-bf0d-30f3bb358a1a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe75:b92f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 533638, 'tstamp': 533638}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253751, 'error': None, 'target': 'ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:31:28 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:28.980 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[03037152-1a4f-4520-a783-bef42e5f7cbd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap29585b3c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:75:b9:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 533638, 'reachable_time': 15692, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253752, 'error': None, 'target': 'ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:31:29 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:29.033 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[70ec1996-4e90-4a6b-8edb-2d22c6232349]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:31:29 compute-0 nova_compute[189608]: 2025-11-24 22:31:29.079 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:29 compute-0 nova_compute[189608]: 2025-11-24 22:31:29.108 189613 DEBUG nova.virt.libvirt.host [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Removed pending event for f238e71a-660e-497c-8472-193245387bcf due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 24 22:31:29 compute-0 nova_compute[189608]: 2025-11-24 22:31:29.108 189613 DEBUG nova.virt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Emitting event <LifecycleEvent: 1764023489.1068342, f238e71a-660e-497c-8472-193245387bcf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:31:29 compute-0 nova_compute[189608]: 2025-11-24 22:31:29.108 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: f238e71a-660e-497c-8472-193245387bcf] VM Resumed (Lifecycle Event)
Nov 24 22:31:29 compute-0 nova_compute[189608]: 2025-11-24 22:31:29.110 189613 DEBUG nova.compute.manager [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 24 22:31:29 compute-0 nova_compute[189608]: 2025-11-24 22:31:29.116 189613 INFO nova.virt.libvirt.driver [-] [instance: f238e71a-660e-497c-8472-193245387bcf] Instance rebooted successfully.
Nov 24 22:31:29 compute-0 nova_compute[189608]: 2025-11-24 22:31:29.116 189613 DEBUG nova.compute.manager [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:31:29 compute-0 nova_compute[189608]: 2025-11-24 22:31:29.139 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: f238e71a-660e-497c-8472-193245387bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:31:29 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:29.140 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[45015687-107a-4117-b818-4f8b173c0cac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:31:29 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:29.143 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap29585b3c-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:31:29 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:29.144 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 22:31:29 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:29.145 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap29585b3c-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:31:29 compute-0 nova_compute[189608]: 2025-11-24 22:31:29.146 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: f238e71a-660e-497c-8472-193245387bcf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 22:31:29 compute-0 nova_compute[189608]: 2025-11-24 22:31:29.149 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:29 compute-0 kernel: tap29585b3c-50: entered promiscuous mode
Nov 24 22:31:29 compute-0 NetworkManager[56413]: <info>  [1764023489.1515] manager: (tap29585b3c-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Nov 24 22:31:29 compute-0 nova_compute[189608]: 2025-11-24 22:31:29.156 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:29 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:29.159 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap29585b3c-50, col_values=(('external_ids', {'iface-id': '7dcd4ddb-3860-49b9-87ed-1daf692defef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:31:29 compute-0 nova_compute[189608]: 2025-11-24 22:31:29.161 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:29 compute-0 ovn_controller[97889]: 2025-11-24T22:31:29Z|00156|binding|INFO|Releasing lport 7dcd4ddb-3860-49b9-87ed-1daf692defef from this chassis (sb_readonly=0)
Nov 24 22:31:29 compute-0 nova_compute[189608]: 2025-11-24 22:31:29.176 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: f238e71a-660e-497c-8472-193245387bcf] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Nov 24 22:31:29 compute-0 nova_compute[189608]: 2025-11-24 22:31:29.176 189613 DEBUG nova.virt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Emitting event <LifecycleEvent: 1764023489.1115909, f238e71a-660e-497c-8472-193245387bcf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:31:29 compute-0 nova_compute[189608]: 2025-11-24 22:31:29.176 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: f238e71a-660e-497c-8472-193245387bcf] VM Started (Lifecycle Event)
Nov 24 22:31:29 compute-0 nova_compute[189608]: 2025-11-24 22:31:29.185 189613 DEBUG oslo_concurrency.lockutils [None req-42396ac4-239c-4713-b01e-4bbc96710fc8 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Lock "f238e71a-660e-497c-8472-193245387bcf" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 3.128s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
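[editor's note] The Acquiring/Acquired/"released ... held 3.128s" bookkeeping here is oslo.concurrency's lockutils: the whole hard-reboot path runs under a semaphore named after the instance UUID so concurrent operations on the same instance serialize. A sketch of that pattern — placeholder work inside, not Nova's actual code — looks like:

    # Sketch of the oslo.concurrency locking pattern visible in these messages:
    # a named in-process lock keyed on the instance UUID.
    from oslo_concurrency import lockutils

    instance_uuid = 'f238e71a-660e-497c-8472-193245387bcf'

    with lockutils.lock(instance_uuid, external=False):
        # the reboot/rebuild work would run here while the lock is held
        pass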
Nov 24 22:31:29 compute-0 nova_compute[189608]: 2025-11-24 22:31:29.191 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:29 compute-0 nova_compute[189608]: 2025-11-24 22:31:29.196 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:29 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:29.198 106776 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/29585b3c-5eec-4652-ae2f-4aa9ec19d924.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/29585b3c-5eec-4652-ae2f-4aa9ec19d924.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 24 22:31:29 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:29.200 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[c3010962-dbb4-4993-9310-d30a7b2082e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:31:29 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:29.201 106776 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 24 22:31:29 compute-0 ovn_metadata_agent[106771]: global
Nov 24 22:31:29 compute-0 ovn_metadata_agent[106771]:     log         /dev/log local0 debug
Nov 24 22:31:29 compute-0 ovn_metadata_agent[106771]:     log-tag     haproxy-metadata-proxy-29585b3c-5eec-4652-ae2f-4aa9ec19d924
Nov 24 22:31:29 compute-0 ovn_metadata_agent[106771]:     user        root
Nov 24 22:31:29 compute-0 ovn_metadata_agent[106771]:     group       root
Nov 24 22:31:29 compute-0 ovn_metadata_agent[106771]:     maxconn     1024
Nov 24 22:31:29 compute-0 ovn_metadata_agent[106771]:     pidfile     /var/lib/neutron/external/pids/29585b3c-5eec-4652-ae2f-4aa9ec19d924.pid.haproxy
Nov 24 22:31:29 compute-0 ovn_metadata_agent[106771]:     daemon
Nov 24 22:31:29 compute-0 ovn_metadata_agent[106771]: 
Nov 24 22:31:29 compute-0 ovn_metadata_agent[106771]: defaults
Nov 24 22:31:29 compute-0 ovn_metadata_agent[106771]:     log global
Nov 24 22:31:29 compute-0 ovn_metadata_agent[106771]:     mode http
Nov 24 22:31:29 compute-0 ovn_metadata_agent[106771]:     option httplog
Nov 24 22:31:29 compute-0 ovn_metadata_agent[106771]:     option dontlognull
Nov 24 22:31:29 compute-0 ovn_metadata_agent[106771]:     option http-server-close
Nov 24 22:31:29 compute-0 ovn_metadata_agent[106771]:     option forwardfor
Nov 24 22:31:29 compute-0 ovn_metadata_agent[106771]:     retries                 3
Nov 24 22:31:29 compute-0 ovn_metadata_agent[106771]:     timeout http-request    30s
Nov 24 22:31:29 compute-0 ovn_metadata_agent[106771]:     timeout connect         30s
Nov 24 22:31:29 compute-0 ovn_metadata_agent[106771]:     timeout client          32s
Nov 24 22:31:29 compute-0 ovn_metadata_agent[106771]:     timeout server          32s
Nov 24 22:31:29 compute-0 ovn_metadata_agent[106771]:     timeout http-keep-alive 30s
Nov 24 22:31:29 compute-0 ovn_metadata_agent[106771]: 
Nov 24 22:31:29 compute-0 ovn_metadata_agent[106771]: 
Nov 24 22:31:29 compute-0 ovn_metadata_agent[106771]: listen listener
Nov 24 22:31:29 compute-0 ovn_metadata_agent[106771]:     bind 169.254.169.254:80
Nov 24 22:31:29 compute-0 ovn_metadata_agent[106771]:     server metadata /var/lib/neutron/metadata_proxy
Nov 24 22:31:29 compute-0 ovn_metadata_agent[106771]:     http-request add-header X-OVN-Network-ID 29585b3c-5eec-4652-ae2f-4aa9ec19d924
Nov 24 22:31:29 compute-0 ovn_metadata_agent[106771]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 24 22:31:29 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:29.202 106776 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924', 'env', 'PROCESS_TAG=haproxy-29585b3c-5eec-4652-ae2f-4aa9ec19d924', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/29585b3c-5eec-4652-ae2f-4aa9ec19d924.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
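[editor's note] Having rendered the haproxy configuration above (bind on 169.254.169.254:80, X-OVN-Network-ID header injected, backend on the neutron metadata unix socket), the agent launches haproxy inside the ovnmeta namespace via neutron-rootwrap, as the command line in the previous record shows. A hedged, standalone way to confirm the listener is alive from the host — not part of the agent; any HTTP status back (even 404/400, since the probe lacks a real instance source address) proves haproxy is answering — is:

    # Hedged check: curl the metadata proxy inside the ovnmeta namespace.
    # Requires root; namespace name is taken from the log.
    import subprocess

    NS = 'ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924'
    out = subprocess.run(
        ['ip', 'netns', 'exec', NS,
         'curl', '-s', '-o', '/dev/null', '-w', '%{http_code}',
         'http://169.254.169.254/openstack/latest/meta_data.json'],
        capture_output=True, text=True, check=False)
    print('metadata proxy HTTP status:', out.stdout.strip())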
Nov 24 22:31:29 compute-0 nova_compute[189608]: 2025-11-24 22:31:29.206 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: f238e71a-660e-497c-8472-193245387bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:31:29 compute-0 nova_compute[189608]: 2025-11-24 22:31:29.211 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: f238e71a-660e-497c-8472-193245387bcf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 22:31:29 compute-0 podman[253770]: 2025-11-24 22:31:29.549173805 +0000 UTC m=+0.102463364 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 24 22:31:29 compute-0 podman[253769]: 2025-11-24 22:31:29.58473691 +0000 UTC m=+0.140447364 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 24 22:31:29 compute-0 podman[253829]: 2025-11-24 22:31:29.698414553 +0000 UTC m=+0.066692783 container create 3ee7539097e986791d59e38728eb67ee91b476e1d9039d513c39cd8168d12e3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:31:29 compute-0 systemd[1]: Started libpod-conmon-3ee7539097e986791d59e38728eb67ee91b476e1d9039d513c39cd8168d12e3c.scope.
Nov 24 22:31:29 compute-0 podman[203795]: time="2025-11-24T22:31:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:31:29 compute-0 podman[253829]: 2025-11-24 22:31:29.662745065 +0000 UTC m=+0.031023315 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 22:31:29 compute-0 systemd[1]: Started libcrun container.
Nov 24 22:31:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89b0f2899f8766daddc9797cb8372a50627b0f67d92e62aec33eda28b1794e08/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 24 22:31:29 compute-0 podman[253829]: 2025-11-24 22:31:29.816695358 +0000 UTC m=+0.184973608 container init 3ee7539097e986791d59e38728eb67ee91b476e1d9039d513c39cd8168d12e3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 24 22:31:29 compute-0 podman[253829]: 2025-11-24 22:31:29.828867956 +0000 UTC m=+0.197146226 container start 3ee7539097e986791d59e38728eb67ee91b476e1d9039d513c39cd8168d12e3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 22:31:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:31:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31988 "" "Go-http-client/1.1"
Nov 24 22:31:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:31:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5736 "" "Go-http-client/1.1"
Nov 24 22:31:29 compute-0 neutron-haproxy-ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924[253843]: [NOTICE]   (253847) : New worker (253849) forked
Nov 24 22:31:29 compute-0 neutron-haproxy-ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924[253843]: [NOTICE]   (253847) : Loading success.
Nov 24 22:31:30 compute-0 nova_compute[189608]: 2025-11-24 22:31:30.217 189613 DEBUG nova.compute.manager [req-47f1e52a-9a90-42dd-964a-c4b082ebf1c4 req-0bd3f3f0-7158-4eee-baa8-eeb77b31ef40 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Received event network-vif-plugged-fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:31:30 compute-0 nova_compute[189608]: 2025-11-24 22:31:30.218 189613 DEBUG oslo_concurrency.lockutils [req-47f1e52a-9a90-42dd-964a-c4b082ebf1c4 req-0bd3f3f0-7158-4eee-baa8-eeb77b31ef40 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "f238e71a-660e-497c-8472-193245387bcf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:31:30 compute-0 nova_compute[189608]: 2025-11-24 22:31:30.218 189613 DEBUG oslo_concurrency.lockutils [req-47f1e52a-9a90-42dd-964a-c4b082ebf1c4 req-0bd3f3f0-7158-4eee-baa8-eeb77b31ef40 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "f238e71a-660e-497c-8472-193245387bcf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:31:30 compute-0 nova_compute[189608]: 2025-11-24 22:31:30.219 189613 DEBUG oslo_concurrency.lockutils [req-47f1e52a-9a90-42dd-964a-c4b082ebf1c4 req-0bd3f3f0-7158-4eee-baa8-eeb77b31ef40 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "f238e71a-660e-497c-8472-193245387bcf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:31:30 compute-0 nova_compute[189608]: 2025-11-24 22:31:30.220 189613 DEBUG nova.compute.manager [req-47f1e52a-9a90-42dd-964a-c4b082ebf1c4 req-0bd3f3f0-7158-4eee-baa8-eeb77b31ef40 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] No waiting events found dispatching network-vif-plugged-fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 22:31:30 compute-0 nova_compute[189608]: 2025-11-24 22:31:30.220 189613 WARNING nova.compute.manager [req-47f1e52a-9a90-42dd-964a-c4b082ebf1c4 req-0bd3f3f0-7158-4eee-baa8-eeb77b31ef40 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Received unexpected event network-vif-plugged-fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13 for instance with vm_state active and task_state None.
Nov 24 22:31:30 compute-0 nova_compute[189608]: 2025-11-24 22:31:30.220 189613 DEBUG nova.compute.manager [req-47f1e52a-9a90-42dd-964a-c4b082ebf1c4 req-0bd3f3f0-7158-4eee-baa8-eeb77b31ef40 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Received event network-vif-plugged-fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:31:30 compute-0 nova_compute[189608]: 2025-11-24 22:31:30.221 189613 DEBUG oslo_concurrency.lockutils [req-47f1e52a-9a90-42dd-964a-c4b082ebf1c4 req-0bd3f3f0-7158-4eee-baa8-eeb77b31ef40 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "f238e71a-660e-497c-8472-193245387bcf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:31:30 compute-0 nova_compute[189608]: 2025-11-24 22:31:30.221 189613 DEBUG oslo_concurrency.lockutils [req-47f1e52a-9a90-42dd-964a-c4b082ebf1c4 req-0bd3f3f0-7158-4eee-baa8-eeb77b31ef40 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "f238e71a-660e-497c-8472-193245387bcf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:31:30 compute-0 nova_compute[189608]: 2025-11-24 22:31:30.221 189613 DEBUG oslo_concurrency.lockutils [req-47f1e52a-9a90-42dd-964a-c4b082ebf1c4 req-0bd3f3f0-7158-4eee-baa8-eeb77b31ef40 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "f238e71a-660e-497c-8472-193245387bcf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:31:30 compute-0 nova_compute[189608]: 2025-11-24 22:31:30.221 189613 DEBUG nova.compute.manager [req-47f1e52a-9a90-42dd-964a-c4b082ebf1c4 req-0bd3f3f0-7158-4eee-baa8-eeb77b31ef40 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] No waiting events found dispatching network-vif-plugged-fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 22:31:30 compute-0 nova_compute[189608]: 2025-11-24 22:31:30.221 189613 WARNING nova.compute.manager [req-47f1e52a-9a90-42dd-964a-c4b082ebf1c4 req-0bd3f3f0-7158-4eee-baa8-eeb77b31ef40 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Received unexpected event network-vif-plugged-fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13 for instance with vm_state active and task_state None.
Nov 24 22:31:30 compute-0 nova_compute[189608]: 2025-11-24 22:31:30.222 189613 DEBUG nova.compute.manager [req-47f1e52a-9a90-42dd-964a-c4b082ebf1c4 req-0bd3f3f0-7158-4eee-baa8-eeb77b31ef40 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Received event network-vif-plugged-fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:31:30 compute-0 nova_compute[189608]: 2025-11-24 22:31:30.222 189613 DEBUG oslo_concurrency.lockutils [req-47f1e52a-9a90-42dd-964a-c4b082ebf1c4 req-0bd3f3f0-7158-4eee-baa8-eeb77b31ef40 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "f238e71a-660e-497c-8472-193245387bcf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:31:30 compute-0 nova_compute[189608]: 2025-11-24 22:31:30.222 189613 DEBUG oslo_concurrency.lockutils [req-47f1e52a-9a90-42dd-964a-c4b082ebf1c4 req-0bd3f3f0-7158-4eee-baa8-eeb77b31ef40 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "f238e71a-660e-497c-8472-193245387bcf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:31:30 compute-0 nova_compute[189608]: 2025-11-24 22:31:30.222 189613 DEBUG oslo_concurrency.lockutils [req-47f1e52a-9a90-42dd-964a-c4b082ebf1c4 req-0bd3f3f0-7158-4eee-baa8-eeb77b31ef40 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "f238e71a-660e-497c-8472-193245387bcf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:31:30 compute-0 nova_compute[189608]: 2025-11-24 22:31:30.223 189613 DEBUG nova.compute.manager [req-47f1e52a-9a90-42dd-964a-c4b082ebf1c4 req-0bd3f3f0-7158-4eee-baa8-eeb77b31ef40 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] No waiting events found dispatching network-vif-plugged-fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 22:31:30 compute-0 nova_compute[189608]: 2025-11-24 22:31:30.223 189613 WARNING nova.compute.manager [req-47f1e52a-9a90-42dd-964a-c4b082ebf1c4 req-0bd3f3f0-7158-4eee-baa8-eeb77b31ef40 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Received unexpected event network-vif-plugged-fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13 for instance with vm_state active and task_state None.
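[editor's note] The repeated "Received unexpected event network-vif-plugged" warnings are harmless here: Neutron notifies Nova each time the port goes ACTIVE, but the hard reboot has already finished (vm_state active, task_state None), so no code path is waiting on the event and Nova just logs and drops it. For reference, a hedged sketch of the notification Neutron sends — Nova's os-server-external-events API; the endpoint and token are placeholders, the UUIDs come from the log — is:

    # Sketch of the external-event POST that produces the records above.
    import requests

    NOVA = 'http://nova-api.example.com/v2.1'      # placeholder endpoint
    TOKEN = '<keystone-admin-token>'               # placeholder token

    payload = {'events': [{
        'name': 'network-vif-plugged',
        'server_uuid': 'f238e71a-660e-497c-8472-193245387bcf',
        'tag': 'fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13',   # the port/VIF id
        'status': 'completed',
    }]}
    resp = requests.post(f'{NOVA}/os-server-external-events',
                         json=payload, headers={'X-Auth-Token': TOKEN})
    print(resp.status_code)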
Nov 24 22:31:30 compute-0 nova_compute[189608]: 2025-11-24 22:31:30.812 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: f238e71a-660e-497c-8472-193245387bcf] Updating instance_info_cache with network_info: [{"id": "fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13", "address": "fa:16:3e:40:76:1e", "network": {"id": "29585b3c-5eec-4652-ae2f-4aa9ec19d924", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-98517945-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "97e21ffeec1c4428ba3d70499fc3281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd48bd9-f9", "ovs_interfaceid": "fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:31:30 compute-0 nova_compute[189608]: 2025-11-24 22:31:30.827 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Releasing lock "refresh_cache-f238e71a-660e-497c-8472-193245387bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:31:30 compute-0 nova_compute[189608]: 2025-11-24 22:31:30.828 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: f238e71a-660e-497c-8472-193245387bcf] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 24 22:31:31 compute-0 openstack_network_exporter[205945]: ERROR   22:31:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:31:31 compute-0 openstack_network_exporter[205945]: ERROR   22:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:31:31 compute-0 openstack_network_exporter[205945]: ERROR   22:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:31:31 compute-0 openstack_network_exporter[205945]: ERROR   22:31:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:31:31 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:31:31 compute-0 openstack_network_exporter[205945]: ERROR   22:31:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:31:31 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:31:31 compute-0 sshd-session[253587]: Invalid user root2 from 185.217.1.246 port 64595
Nov 24 22:31:32 compute-0 sshd-session[253587]: Disconnecting invalid user root2 185.217.1.246 port 64595: Change of username or service not allowed: (root2,ssh-connection) -> (vncuser,ssh-connection) [preauth]
Nov 24 22:31:33 compute-0 nova_compute[189608]: 2025-11-24 22:31:33.539 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:33 compute-0 nova_compute[189608]: 2025-11-24 22:31:33.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:31:33 compute-0 nova_compute[189608]: 2025-11-24 22:31:33.822 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:31:33 compute-0 nova_compute[189608]: 2025-11-24 22:31:33.822 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:31:33 compute-0 nova_compute[189608]: 2025-11-24 22:31:33.823 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:31:33 compute-0 nova_compute[189608]: 2025-11-24 22:31:33.823 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 22:31:33 compute-0 nova_compute[189608]: 2025-11-24 22:31:33.927 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:31:34 compute-0 nova_compute[189608]: 2025-11-24 22:31:34.003 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
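[editor's note] The resource-tracker audit shells out to qemu-img info under oslo.concurrency's prlimit wrapper, capping address space at 1 GiB and CPU time at 30 s — that is what the --as=1073741824 --cpu=30 flags in the logged command line do. A sketch of the same call made directly through processutils (disk path copied from the log; everything else is illustrative):

    # Sketch of the wrapped qemu-img call shown above, with the same limits.
    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1024 * 1024 * 1024,
                                         cpu_time=30)
    out, err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk',
        '--force-share', '--output=json',
        prlimit=limits)
    print(out)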
Nov 24 22:31:34 compute-0 nova_compute[189608]: 2025-11-24 22:31:34.004 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:31:34 compute-0 nova_compute[189608]: 2025-11-24 22:31:34.070 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:31:34 compute-0 nova_compute[189608]: 2025-11-24 22:31:34.083 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c6cadef7-2599-4c75-a37d-2d1e6d469a82/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:31:34 compute-0 nova_compute[189608]: 2025-11-24 22:31:34.104 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:34 compute-0 nova_compute[189608]: 2025-11-24 22:31:34.153 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c6cadef7-2599-4c75-a37d-2d1e6d469a82/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:31:34 compute-0 nova_compute[189608]: 2025-11-24 22:31:34.155 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c6cadef7-2599-4c75-a37d-2d1e6d469a82/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:31:34 compute-0 nova_compute[189608]: 2025-11-24 22:31:34.231 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c6cadef7-2599-4c75-a37d-2d1e6d469a82/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:31:34 compute-0 nova_compute[189608]: 2025-11-24 22:31:34.245 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cf45f1e3-b80d-4213-80aa-995f57a9a476/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:31:34 compute-0 nova_compute[189608]: 2025-11-24 22:31:34.320 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cf45f1e3-b80d-4213-80aa-995f57a9a476/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:31:34 compute-0 nova_compute[189608]: 2025-11-24 22:31:34.322 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cf45f1e3-b80d-4213-80aa-995f57a9a476/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:31:34 compute-0 nova_compute[189608]: 2025-11-24 22:31:34.395 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cf45f1e3-b80d-4213-80aa-995f57a9a476/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:31:34 compute-0 nova_compute[189608]: 2025-11-24 22:31:34.405 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f238e71a-660e-497c-8472-193245387bcf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:31:34 compute-0 nova_compute[189608]: 2025-11-24 22:31:34.485 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f238e71a-660e-497c-8472-193245387bcf/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:31:34 compute-0 nova_compute[189608]: 2025-11-24 22:31:34.486 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f238e71a-660e-497c-8472-193245387bcf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:31:34 compute-0 nova_compute[189608]: 2025-11-24 22:31:34.558 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f238e71a-660e-497c-8472-193245387bcf/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:31:35 compute-0 nova_compute[189608]: 2025-11-24 22:31:35.169 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:31:35 compute-0 nova_compute[189608]: 2025-11-24 22:31:35.170 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4719MB free_disk=72.04025268554688GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 22:31:35 compute-0 nova_compute[189608]: 2025-11-24 22:31:35.171 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:31:35 compute-0 nova_compute[189608]: 2025-11-24 22:31:35.171 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:31:35 compute-0 nova_compute[189608]: 2025-11-24 22:31:35.299 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance f238e71a-660e-497c-8472-193245387bcf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:31:35 compute-0 nova_compute[189608]: 2025-11-24 22:31:35.300 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance cf45f1e3-b80d-4213-80aa-995f57a9a476 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:31:35 compute-0 nova_compute[189608]: 2025-11-24 22:31:35.300 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance a3bee9ba-6618-44bd-a443-da9fff6862a9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:31:35 compute-0 nova_compute[189608]: 2025-11-24 22:31:35.300 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance c6cadef7-2599-4c75-a37d-2d1e6d469a82 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:31:35 compute-0 nova_compute[189608]: 2025-11-24 22:31:35.300 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 22:31:35 compute-0 nova_compute[189608]: 2025-11-24 22:31:35.301 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 22:31:35 compute-0 nova_compute[189608]: 2025-11-24 22:31:35.424 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:31:35 compute-0 nova_compute[189608]: 2025-11-24 22:31:35.446 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:31:35 compute-0 nova_compute[189608]: 2025-11-24 22:31:35.475 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 22:31:35 compute-0 nova_compute[189608]: 2025-11-24 22:31:35.475 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.304s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:31:38 compute-0 sshd-session[253884]: Invalid user vncuser from 185.217.1.246 port 29884
Nov 24 22:31:38 compute-0 nova_compute[189608]: 2025-11-24 22:31:38.476 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:31:38 compute-0 nova_compute[189608]: 2025-11-24 22:31:38.543 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:38 compute-0 sshd-session[253884]: Disconnecting invalid user vncuser 185.217.1.246 port 29884: Change of username or service not allowed: (vncuser,ssh-connection) -> (publicuser,ssh-connection) [preauth]
Nov 24 22:31:39 compute-0 nova_compute[189608]: 2025-11-24 22:31:39.087 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:39 compute-0 podman[253886]: 2025-11-24 22:31:39.601209485 +0000 UTC m=+0.140392013 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 24 22:31:39 compute-0 nova_compute[189608]: 2025-11-24 22:31:39.791 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:31:39 compute-0 nova_compute[189608]: 2025-11-24 22:31:39.792 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:31:41 compute-0 nova_compute[189608]: 2025-11-24 22:31:41.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:31:42 compute-0 nova_compute[189608]: 2025-11-24 22:31:42.797 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:31:42 compute-0 nova_compute[189608]: 2025-11-24 22:31:42.798 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:31:43 compute-0 nova_compute[189608]: 2025-11-24 22:31:43.548 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:44 compute-0 nova_compute[189608]: 2025-11-24 22:31:44.092 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:44 compute-0 podman[253911]: 2025-11-24 22:31:44.568461989 +0000 UTC m=+0.113398375 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:31:45 compute-0 sshd-session[253909]: Invalid user publicuser from 185.217.1.246 port 58469
Nov 24 22:31:45 compute-0 podman[253929]: 2025-11-24 22:31:45.98614728 +0000 UTC m=+0.120746113 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 24 22:31:46 compute-0 nova_compute[189608]: 2025-11-24 22:31:46.474 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:46 compute-0 sshd-session[253909]: Disconnecting invalid user publicuser 185.217.1.246 port 58469: Change of username or service not allowed: (publicuser,ssh-connection) -> (netlink,ssh-connection) [preauth]
Nov 24 22:31:47 compute-0 nova_compute[189608]: 2025-11-24 22:31:47.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:31:47 compute-0 nova_compute[189608]: 2025-11-24 22:31:47.794 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 22:31:48 compute-0 nova_compute[189608]: 2025-11-24 22:31:48.554 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:49 compute-0 nova_compute[189608]: 2025-11-24 22:31:49.096 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:51 compute-0 nova_compute[189608]: 2025-11-24 22:31:51.683 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:52 compute-0 sshd-session[253947]: Invalid user netlink from 185.217.1.246 port 19759
Nov 24 22:31:53 compute-0 nova_compute[189608]: 2025-11-24 22:31:53.557 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:53 compute-0 podman[253985]: 2025-11-24 22:31:53.561326888 +0000 UTC m=+0.104399466 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vendor=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, config_id=edpm, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350)
Nov 24 22:31:53 compute-0 podman[253984]: 2025-11-24 22:31:53.606875683 +0000 UTC m=+0.143920633 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, vendor=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, container_name=kepler, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=9.4, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., config_id=edpm, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Nov 24 22:31:53 compute-0 podman[253986]: 2025-11-24 22:31:53.621778585 +0000 UTC m=+0.146133230 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:31:53 compute-0 sshd-session[253947]: Disconnecting invalid user netlink 185.217.1.246 port 19759: Change of username or service not allowed: (netlink,ssh-connection) -> (auditadm,ssh-connection) [preauth]
Nov 24 22:31:54 compute-0 nova_compute[189608]: 2025-11-24 22:31:54.100 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:54 compute-0 ovn_controller[97889]: 2025-11-24T22:31:54Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8f:79:84 10.100.0.9
Nov 24 22:31:54 compute-0 ovn_controller[97889]: 2025-11-24T22:31:54Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8f:79:84 10.100.0.9
Nov 24 22:31:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:54.597 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:31:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:54.598 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:31:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:31:54.599 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:31:56 compute-0 nova_compute[189608]: 2025-11-24 22:31:56.621 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:58 compute-0 podman[254043]: 2025-11-24 22:31:58.537518259 +0000 UTC m=+0.071942987 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 24 22:31:58 compute-0 nova_compute[189608]: 2025-11-24 22:31:58.561 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:59 compute-0 nova_compute[189608]: 2025-11-24 22:31:59.101 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:31:59 compute-0 podman[203795]: time="2025-11-24T22:31:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:31:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:31:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31991 "" "Go-http-client/1.1"
Nov 24 22:31:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:31:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5727 "" "Go-http-client/1.1"
Nov 24 22:32:00 compute-0 podman[254076]: 2025-11-24 22:32:00.526516401 +0000 UTC m=+0.077522090 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 24 22:32:00 compute-0 sshd-session[254041]: Invalid user auditadm from 185.217.1.246 port 43515
Nov 24 22:32:00 compute-0 podman[254075]: 2025-11-24 22:32:00.579039482 +0000 UTC m=+0.138062490 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 22:32:01 compute-0 openstack_network_exporter[205945]: ERROR   22:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:32:01 compute-0 openstack_network_exporter[205945]: ERROR   22:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:32:01 compute-0 openstack_network_exporter[205945]: ERROR   22:32:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:32:01 compute-0 openstack_network_exporter[205945]: ERROR   22:32:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:32:01 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:32:01 compute-0 openstack_network_exporter[205945]: ERROR   22:32:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:32:01 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:32:01 compute-0 nova_compute[189608]: 2025-11-24 22:32:01.705 189613 INFO nova.compute.manager [None req-b0a0a25f-d9ed-4cc3-9240-f0455ead36ff 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Get console output
Nov 24 22:32:01 compute-0 nova_compute[189608]: 2025-11-24 22:32:01.725 239876 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 24 22:32:02 compute-0 nova_compute[189608]: 2025-11-24 22:32:02.023 189613 DEBUG oslo_concurrency.lockutils [None req-430f6103-819b-454d-b3f0-457df7b1809c 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Acquiring lock "c6cadef7-2599-4c75-a37d-2d1e6d469a82" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:32:02 compute-0 nova_compute[189608]: 2025-11-24 22:32:02.028 189613 DEBUG oslo_concurrency.lockutils [None req-430f6103-819b-454d-b3f0-457df7b1809c 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Lock "c6cadef7-2599-4c75-a37d-2d1e6d469a82" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:32:02 compute-0 nova_compute[189608]: 2025-11-24 22:32:02.030 189613 DEBUG oslo_concurrency.lockutils [None req-430f6103-819b-454d-b3f0-457df7b1809c 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Acquiring lock "c6cadef7-2599-4c75-a37d-2d1e6d469a82-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:32:02 compute-0 nova_compute[189608]: 2025-11-24 22:32:02.032 189613 DEBUG oslo_concurrency.lockutils [None req-430f6103-819b-454d-b3f0-457df7b1809c 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Lock "c6cadef7-2599-4c75-a37d-2d1e6d469a82-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:32:02 compute-0 nova_compute[189608]: 2025-11-24 22:32:02.034 189613 DEBUG oslo_concurrency.lockutils [None req-430f6103-819b-454d-b3f0-457df7b1809c 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Lock "c6cadef7-2599-4c75-a37d-2d1e6d469a82-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:32:02 compute-0 nova_compute[189608]: 2025-11-24 22:32:02.037 189613 INFO nova.compute.manager [None req-430f6103-819b-454d-b3f0-457df7b1809c 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Terminating instance
Nov 24 22:32:02 compute-0 nova_compute[189608]: 2025-11-24 22:32:02.038 189613 DEBUG nova.compute.manager [None req-430f6103-819b-454d-b3f0-457df7b1809c 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 24 22:32:02 compute-0 kernel: tapcb21872b-f1 (unregistering): left promiscuous mode
Nov 24 22:32:02 compute-0 NetworkManager[56413]: <info>  [1764023522.4194] device (tapcb21872b-f1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 24 22:32:02 compute-0 ovn_controller[97889]: 2025-11-24T22:32:02Z|00157|binding|INFO|Releasing lport cb21872b-f10a-4b2e-887a-5b0c069f8b46 from this chassis (sb_readonly=0)
Nov 24 22:32:02 compute-0 ovn_controller[97889]: 2025-11-24T22:32:02Z|00158|binding|INFO|Setting lport cb21872b-f10a-4b2e-887a-5b0c069f8b46 down in Southbound
Nov 24 22:32:02 compute-0 nova_compute[189608]: 2025-11-24 22:32:02.438 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:02 compute-0 ovn_controller[97889]: 2025-11-24T22:32:02Z|00159|binding|INFO|Removing iface tapcb21872b-f1 ovn-installed in OVS
Nov 24 22:32:02 compute-0 nova_compute[189608]: 2025-11-24 22:32:02.444 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:02 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:02.458 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:79:84 10.100.0.9'], port_security=['fa:16:3e:8f:79:84 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'c6cadef7-2599-4c75-a37d-2d1e6d469a82', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c160915-cb1d-4981-a2c7-30899c389f1d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ac27d3d1c734f4bab455262f79d3106', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'afc1e2a0-6f63-4b55-b0d0-23144ca125aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.188'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5128d898-210e-40a7-b165-d9fdd3199b44, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], logical_port=cb21872b-f10a-4b2e-887a-5b0c069f8b46) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:32:02 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:02.461 106776 INFO neutron.agent.ovn.metadata.agent [-] Port cb21872b-f10a-4b2e-887a-5b0c069f8b46 in datapath 6c160915-cb1d-4981-a2c7-30899c389f1d unbound from our chassis
Nov 24 22:32:02 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:02.465 106776 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6c160915-cb1d-4981-a2c7-30899c389f1d
Nov 24 22:32:02 compute-0 nova_compute[189608]: 2025-11-24 22:32:02.489 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:02 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:02.497 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[cd3a9a76-bf59-4e31-bf75-df51b9a16441]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:32:02 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Nov 24 22:32:02 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Consumed 37.777s CPU time.
Nov 24 22:32:02 compute-0 systemd-machined[155884]: Machine qemu-13-instance-0000000d terminated.
Nov 24 22:32:02 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:02.558 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[bda37dee-f262-404c-9b2b-5b39295740e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:32:02 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:02.562 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[50480ddd-66c6-4890-bea9-70f4ed798a54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:32:02 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:02.620 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[544cf38c-57a2-4067-8deb-4bcfe2bc03f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:32:02 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:02.651 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[32509877-db5a-4bee-98dc-3f5e6b7cf3de]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c160915-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:72:57:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 525956, 'reachable_time': 29113, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254145, 'error': None, 'target': 'ovnmeta-6c160915-cb1d-4981-a2c7-30899c389f1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:32:02 compute-0 nova_compute[189608]: 2025-11-24 22:32:02.670 189613 INFO nova.virt.libvirt.driver [-] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Instance destroyed successfully.
Nov 24 22:32:02 compute-0 nova_compute[189608]: 2025-11-24 22:32:02.671 189613 DEBUG nova.objects.instance [None req-430f6103-819b-454d-b3f0-457df7b1809c 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Lazy-loading 'resources' on Instance uuid c6cadef7-2599-4c75-a37d-2d1e6d469a82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:32:02 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:02.679 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[6a7d45a2-d5bd-48e5-952d-584761e3d7c8]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6c160915-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525973, 'tstamp': 525973}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254153, 'error': None, 'target': 'ovnmeta-6c160915-cb1d-4981-a2c7-30899c389f1d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6c160915-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525978, 'tstamp': 525978}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254153, 'error': None, 'target': 'ovnmeta-6c160915-cb1d-4981-a2c7-30899c389f1d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:32:02 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:02.682 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c160915-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:32:02 compute-0 nova_compute[189608]: 2025-11-24 22:32:02.683 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:02 compute-0 nova_compute[189608]: 2025-11-24 22:32:02.691 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:02 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:02.692 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c160915-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:32:02 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:02.692 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 22:32:02 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:02.692 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6c160915-c0, col_values=(('external_ids', {'iface-id': '5660e6d6-677f-4bf6-8ebf-40ac9c648155'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:32:02 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:02.693 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
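The DelPortCommand / AddPortCommand / DbSetCommand entries above are ovsdbapp commands run against the local ovsdb-server. A rough sketch of the equivalent direct ovsdbapp usage, following the library's documented connection pattern (the socket path is an assumption for a typical local ovsdb-server; port and bridge names are copied from the log):

```python
# Hedged sketch of the ovsdbapp transactions logged above (not Neutron's code).
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',  # assumed path
                                      'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

with api.transaction(check_error=True) as txn:
    # Move the metadata tap from br-ex to br-int and tag it with its iface-id.
    txn.add(api.del_port('tap6c160915-c0', bridge='br-ex', if_exists=True))
    txn.add(api.add_port('br-int', 'tap6c160915-c0', may_exist=True))
    txn.add(api.db_set('Interface', 'tap6c160915-c0',
                       ('external_ids',
                        {'iface-id': '5660e6d6-677f-4bf6-8ebf-40ac9c648155'})))
```

When the requested state already matches the database, ovsdbapp logs "Transaction caused no change", as seen above.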
Nov 24 22:32:02 compute-0 nova_compute[189608]: 2025-11-24 22:32:02.694 189613 DEBUG nova.virt.libvirt.vif [None req-430f6103-819b-454d-b3f0-457df7b1809c 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T22:31:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1817599504',display_name='tempest-TestNetworkBasicOps-server-1817599504',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1817599504',id=13,image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLxjgPD57z4oLWbCvYm08S4teLLP6YMFhe7N1vCySOSw0r8PANeipdLFxrtEpWlvlsl3n4CumxGvw/N3/4lwCpIrRKebvTxvXnUb0162l2nbPzja+J98zHQ6uddUaIqquQ==',key_name='tempest-TestNetworkBasicOps-527888256',keypairs=<?>,launch_index=0,launched_at=2025-11-24T22:31:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4ac27d3d1c734f4bab455262f79d3106',ramdisk_id='',reservation_id='r-v0cvpy4h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-488656933',owner_user_name='tempest-TestNetworkBasicOps-488656933-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T22:31:19Z,user_data=None,user_id='1599850e48894151b7909b89547cd9e2',uuid=c6cadef7-2599-4c75-a37d-2d1e6d469a82,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cb21872b-f10a-4b2e-887a-5b0c069f8b46", "address": "fa:16:3e:8f:79:84", "network": {"id": "6c160915-cb1d-4981-a2c7-30899c389f1d", "bridge": "br-int", "label": "tempest-network-smoke--162777084", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ac27d3d1c734f4bab455262f79d3106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb21872b-f1", 
"ovs_interfaceid": "cb21872b-f10a-4b2e-887a-5b0c069f8b46", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 24 22:32:02 compute-0 nova_compute[189608]: 2025-11-24 22:32:02.695 189613 DEBUG nova.network.os_vif_util [None req-430f6103-819b-454d-b3f0-457df7b1809c 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Converting VIF {"id": "cb21872b-f10a-4b2e-887a-5b0c069f8b46", "address": "fa:16:3e:8f:79:84", "network": {"id": "6c160915-cb1d-4981-a2c7-30899c389f1d", "bridge": "br-int", "label": "tempest-network-smoke--162777084", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ac27d3d1c734f4bab455262f79d3106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb21872b-f1", "ovs_interfaceid": "cb21872b-f10a-4b2e-887a-5b0c069f8b46", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 22:32:02 compute-0 nova_compute[189608]: 2025-11-24 22:32:02.696 189613 DEBUG nova.network.os_vif_util [None req-430f6103-819b-454d-b3f0-457df7b1809c 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8f:79:84,bridge_name='br-int',has_traffic_filtering=True,id=cb21872b-f10a-4b2e-887a-5b0c069f8b46,network=Network(6c160915-cb1d-4981-a2c7-30899c389f1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcb21872b-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 22:32:02 compute-0 nova_compute[189608]: 2025-11-24 22:32:02.697 189613 DEBUG os_vif [None req-430f6103-819b-454d-b3f0-457df7b1809c 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8f:79:84,bridge_name='br-int',has_traffic_filtering=True,id=cb21872b-f10a-4b2e-887a-5b0c069f8b46,network=Network(6c160915-cb1d-4981-a2c7-30899c389f1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcb21872b-f1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
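The "Converting VIF ... / Converted object VIFOpenVSwitch ... / Unplugging vif ..." sequence is Nova handing the port to the os-vif library. A rough shape sketch of that call based on the os-vif documentation (the field set here is deliberately minimal and not exhaustive; IDs and names are copied from the log):

```python
# Hedged sketch of the os-vif unplug call behind the lines above.
import os_vif
from os_vif import objects

objects.register_all()
os_vif.initialize()  # loads the 'ovs' plugin among others

network = objects.network.Network(
    id='6c160915-cb1d-4981-a2c7-30899c389f1d', bridge='br-int')
vif = objects.vif.VIFOpenVSwitch(
    id='cb21872b-f10a-4b2e-887a-5b0c069f8b46',
    address='fa:16:3e:8f:79:84',
    vif_name='tapcb21872b-f1',
    bridge_name='br-int',
    network=network)
inst = objects.instance_info.InstanceInfo(
    uuid='c6cadef7-2599-4c75-a37d-2d1e6d469a82',
    name='tempest-TestNetworkBasicOps-server-1817599504')

# Delegates to the ovs plugin, which removes the tap port from br-int,
# producing the DelPortCommand and "Successfully unplugged vif" lines above.
os_vif.unplug(vif, inst)
```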
Nov 24 22:32:02 compute-0 nova_compute[189608]: 2025-11-24 22:32:02.699 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:02 compute-0 nova_compute[189608]: 2025-11-24 22:32:02.700 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcb21872b-f1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:32:02 compute-0 nova_compute[189608]: 2025-11-24 22:32:02.702 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:02 compute-0 nova_compute[189608]: 2025-11-24 22:32:02.705 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 22:32:02 compute-0 nova_compute[189608]: 2025-11-24 22:32:02.709 189613 INFO os_vif [None req-430f6103-819b-454d-b3f0-457df7b1809c 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8f:79:84,bridge_name='br-int',has_traffic_filtering=True,id=cb21872b-f10a-4b2e-887a-5b0c069f8b46,network=Network(6c160915-cb1d-4981-a2c7-30899c389f1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcb21872b-f1')
Nov 24 22:32:02 compute-0 nova_compute[189608]: 2025-11-24 22:32:02.709 189613 INFO nova.virt.libvirt.driver [None req-430f6103-819b-454d-b3f0-457df7b1809c 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Deleting instance files /var/lib/nova/instances/c6cadef7-2599-4c75-a37d-2d1e6d469a82_del
Nov 24 22:32:02 compute-0 nova_compute[189608]: 2025-11-24 22:32:02.711 189613 INFO nova.virt.libvirt.driver [None req-430f6103-819b-454d-b3f0-457df7b1809c 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Deletion of /var/lib/nova/instances/c6cadef7-2599-4c75-a37d-2d1e6d469a82_del complete
Nov 24 22:32:02 compute-0 nova_compute[189608]: 2025-11-24 22:32:02.761 189613 DEBUG nova.compute.manager [req-3bce9ec3-95b6-4013-b7ac-881a4f5f247d req-164cfb95-8f1f-46fe-8cc9-ffc4f1ce9bbc c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Received event network-vif-unplugged-cb21872b-f10a-4b2e-887a-5b0c069f8b46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:32:02 compute-0 nova_compute[189608]: 2025-11-24 22:32:02.762 189613 DEBUG oslo_concurrency.lockutils [req-3bce9ec3-95b6-4013-b7ac-881a4f5f247d req-164cfb95-8f1f-46fe-8cc9-ffc4f1ce9bbc c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "c6cadef7-2599-4c75-a37d-2d1e6d469a82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:32:02 compute-0 nova_compute[189608]: 2025-11-24 22:32:02.762 189613 DEBUG oslo_concurrency.lockutils [req-3bce9ec3-95b6-4013-b7ac-881a4f5f247d req-164cfb95-8f1f-46fe-8cc9-ffc4f1ce9bbc c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "c6cadef7-2599-4c75-a37d-2d1e6d469a82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:32:02 compute-0 nova_compute[189608]: 2025-11-24 22:32:02.763 189613 DEBUG oslo_concurrency.lockutils [req-3bce9ec3-95b6-4013-b7ac-881a4f5f247d req-164cfb95-8f1f-46fe-8cc9-ffc4f1ce9bbc c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "c6cadef7-2599-4c75-a37d-2d1e6d469a82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:32:02 compute-0 nova_compute[189608]: 2025-11-24 22:32:02.763 189613 DEBUG nova.compute.manager [req-3bce9ec3-95b6-4013-b7ac-881a4f5f247d req-164cfb95-8f1f-46fe-8cc9-ffc4f1ce9bbc c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] No waiting events found dispatching network-vif-unplugged-cb21872b-f10a-4b2e-887a-5b0c069f8b46 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 22:32:02 compute-0 nova_compute[189608]: 2025-11-24 22:32:02.763 189613 DEBUG nova.compute.manager [req-3bce9ec3-95b6-4013-b7ac-881a4f5f247d req-164cfb95-8f1f-46fe-8cc9-ffc4f1ce9bbc c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Received event network-vif-unplugged-cb21872b-f10a-4b2e-887a-5b0c069f8b46 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
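The repeated "Acquiring lock ... / Lock ... acquired / Lock ... released" triplets come from oslo.concurrency's named-lock wrapper, which Nova uses to serialize per-instance event handling. A minimal sketch of the pattern (the lock name is copied from the log; the function body is a placeholder):

```python
# Hedged sketch of the oslo.concurrency pattern producing the lock lines above.
from oslo_concurrency import lockutils

@lockutils.synchronized('c6cadef7-2599-4c75-a37d-2d1e6d469a82-events')
def _pop_event():
    # Pop the pending network-vif-unplugged event for this instance, if any.
    ...

_pop_event()
```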
Nov 24 22:32:02 compute-0 nova_compute[189608]: 2025-11-24 22:32:02.770 189613 INFO nova.compute.manager [None req-430f6103-819b-454d-b3f0-457df7b1809c 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Took 0.73 seconds to destroy the instance on the hypervisor.
Nov 24 22:32:02 compute-0 nova_compute[189608]: 2025-11-24 22:32:02.770 189613 DEBUG oslo.service.loopingcall [None req-430f6103-819b-454d-b3f0-457df7b1809c 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 24 22:32:02 compute-0 nova_compute[189608]: 2025-11-24 22:32:02.771 189613 DEBUG nova.compute.manager [-] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 24 22:32:02 compute-0 nova_compute[189608]: 2025-11-24 22:32:02.771 189613 DEBUG nova.network.neutron [-] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 24 22:32:02 compute-0 nova_compute[189608]: 2025-11-24 22:32:02.973 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:03 compute-0 sshd-session[254041]: Disconnecting invalid user auditadm 185.217.1.246 port 43515: Change of username or service not allowed: (auditadm,ssh-connection) -> (ftp_inst,ssh-connection) [preauth]
Nov 24 22:32:03 compute-0 nova_compute[189608]: 2025-11-24 22:32:03.705 189613 DEBUG nova.network.neutron [-] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:32:03 compute-0 nova_compute[189608]: 2025-11-24 22:32:03.744 189613 INFO nova.compute.manager [-] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Took 0.97 seconds to deallocate network for instance.
Nov 24 22:32:03 compute-0 nova_compute[189608]: 2025-11-24 22:32:03.787 189613 DEBUG oslo_concurrency.lockutils [None req-430f6103-819b-454d-b3f0-457df7b1809c 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:32:03 compute-0 nova_compute[189608]: 2025-11-24 22:32:03.788 189613 DEBUG oslo_concurrency.lockutils [None req-430f6103-819b-454d-b3f0-457df7b1809c 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:32:03 compute-0 ovn_controller[97889]: 2025-11-24T22:32:03Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:40:76:1e 10.100.0.12
Nov 24 22:32:03 compute-0 nova_compute[189608]: 2025-11-24 22:32:03.935 189613 DEBUG nova.compute.provider_tree [None req-430f6103-819b-454d-b3f0-457df7b1809c 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:32:03 compute-0 nova_compute[189608]: 2025-11-24 22:32:03.953 189613 DEBUG nova.scheduler.client.report [None req-430f6103-819b-454d-b3f0-457df7b1809c 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
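A small worked example using the inventory data in the line above: the capacity placement schedules against is (total - reserved) * allocation_ratio per resource class.

```python
# Effective capacity derived from the inventory reported for provider
# 7680d048-14f1-46f8-a34d-a7eb32eb11df (values copied from the log line above).
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(rc, capacity)
# VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2
```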
Nov 24 22:32:03 compute-0 nova_compute[189608]: 2025-11-24 22:32:03.983 189613 DEBUG oslo_concurrency.lockutils [None req-430f6103-819b-454d-b3f0-457df7b1809c 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.194s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:32:04 compute-0 nova_compute[189608]: 2025-11-24 22:32:04.017 189613 INFO nova.scheduler.client.report [None req-430f6103-819b-454d-b3f0-457df7b1809c 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Deleted allocations for instance c6cadef7-2599-4c75-a37d-2d1e6d469a82
Nov 24 22:32:04 compute-0 nova_compute[189608]: 2025-11-24 22:32:04.097 189613 DEBUG oslo_concurrency.lockutils [None req-430f6103-819b-454d-b3f0-457df7b1809c 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Lock "c6cadef7-2599-4c75-a37d-2d1e6d469a82" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.069s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:32:04 compute-0 nova_compute[189608]: 2025-11-24 22:32:04.105 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:04 compute-0 nova_compute[189608]: 2025-11-24 22:32:04.872 189613 DEBUG nova.compute.manager [req-d74055f1-e842-4488-acfb-2197eb748f68 req-ae7104e6-8d54-42eb-9d04-4df6090fe814 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Received event network-vif-plugged-cb21872b-f10a-4b2e-887a-5b0c069f8b46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:32:04 compute-0 nova_compute[189608]: 2025-11-24 22:32:04.873 189613 DEBUG oslo_concurrency.lockutils [req-d74055f1-e842-4488-acfb-2197eb748f68 req-ae7104e6-8d54-42eb-9d04-4df6090fe814 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "c6cadef7-2599-4c75-a37d-2d1e6d469a82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:32:04 compute-0 nova_compute[189608]: 2025-11-24 22:32:04.873 189613 DEBUG oslo_concurrency.lockutils [req-d74055f1-e842-4488-acfb-2197eb748f68 req-ae7104e6-8d54-42eb-9d04-4df6090fe814 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "c6cadef7-2599-4c75-a37d-2d1e6d469a82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:32:04 compute-0 nova_compute[189608]: 2025-11-24 22:32:04.874 189613 DEBUG oslo_concurrency.lockutils [req-d74055f1-e842-4488-acfb-2197eb748f68 req-ae7104e6-8d54-42eb-9d04-4df6090fe814 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "c6cadef7-2599-4c75-a37d-2d1e6d469a82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:32:04 compute-0 nova_compute[189608]: 2025-11-24 22:32:04.874 189613 DEBUG nova.compute.manager [req-d74055f1-e842-4488-acfb-2197eb748f68 req-ae7104e6-8d54-42eb-9d04-4df6090fe814 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] No waiting events found dispatching network-vif-plugged-cb21872b-f10a-4b2e-887a-5b0c069f8b46 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 22:32:04 compute-0 nova_compute[189608]: 2025-11-24 22:32:04.874 189613 WARNING nova.compute.manager [req-d74055f1-e842-4488-acfb-2197eb748f68 req-ae7104e6-8d54-42eb-9d04-4df6090fe814 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Received unexpected event network-vif-plugged-cb21872b-f10a-4b2e-887a-5b0c069f8b46 for instance with vm_state deleted and task_state None.
Nov 24 22:32:04 compute-0 nova_compute[189608]: 2025-11-24 22:32:04.874 189613 DEBUG nova.compute.manager [req-d74055f1-e842-4488-acfb-2197eb748f68 req-ae7104e6-8d54-42eb-9d04-4df6090fe814 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Received event network-vif-deleted-cb21872b-f10a-4b2e-887a-5b0c069f8b46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:32:05 compute-0 nova_compute[189608]: 2025-11-24 22:32:05.905 189613 DEBUG oslo_concurrency.lockutils [None req-6e905784-a96f-46e2-82e9-e02a23a46057 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Acquiring lock "cf45f1e3-b80d-4213-80aa-995f57a9a476" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:32:05 compute-0 nova_compute[189608]: 2025-11-24 22:32:05.906 189613 DEBUG oslo_concurrency.lockutils [None req-6e905784-a96f-46e2-82e9-e02a23a46057 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Lock "cf45f1e3-b80d-4213-80aa-995f57a9a476" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:32:05 compute-0 nova_compute[189608]: 2025-11-24 22:32:05.907 189613 DEBUG oslo_concurrency.lockutils [None req-6e905784-a96f-46e2-82e9-e02a23a46057 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Acquiring lock "cf45f1e3-b80d-4213-80aa-995f57a9a476-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:32:05 compute-0 nova_compute[189608]: 2025-11-24 22:32:05.908 189613 DEBUG oslo_concurrency.lockutils [None req-6e905784-a96f-46e2-82e9-e02a23a46057 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Lock "cf45f1e3-b80d-4213-80aa-995f57a9a476-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:32:05 compute-0 nova_compute[189608]: 2025-11-24 22:32:05.909 189613 DEBUG oslo_concurrency.lockutils [None req-6e905784-a96f-46e2-82e9-e02a23a46057 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Lock "cf45f1e3-b80d-4213-80aa-995f57a9a476-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:32:05 compute-0 nova_compute[189608]: 2025-11-24 22:32:05.911 189613 INFO nova.compute.manager [None req-6e905784-a96f-46e2-82e9-e02a23a46057 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Terminating instance
Nov 24 22:32:05 compute-0 nova_compute[189608]: 2025-11-24 22:32:05.913 189613 DEBUG nova.compute.manager [None req-6e905784-a96f-46e2-82e9-e02a23a46057 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 24 22:32:05 compute-0 kernel: tap8f00051e-bd (unregistering): left promiscuous mode
Nov 24 22:32:05 compute-0 NetworkManager[56413]: <info>  [1764023525.9659] device (tap8f00051e-bd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 24 22:32:05 compute-0 nova_compute[189608]: 2025-11-24 22:32:05.983 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:05 compute-0 ovn_controller[97889]: 2025-11-24T22:32:05Z|00160|binding|INFO|Releasing lport 8f00051e-bd87-48eb-aba6-5dbf3d527aef from this chassis (sb_readonly=0)
Nov 24 22:32:05 compute-0 ovn_controller[97889]: 2025-11-24T22:32:05Z|00161|binding|INFO|Setting lport 8f00051e-bd87-48eb-aba6-5dbf3d527aef down in Southbound
Nov 24 22:32:05 compute-0 ovn_controller[97889]: 2025-11-24T22:32:05Z|00162|binding|INFO|Removing iface tap8f00051e-bd ovn-installed in OVS
Nov 24 22:32:05 compute-0 nova_compute[189608]: 2025-11-24 22:32:05.993 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:06 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:06.006 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:89:1f 10.100.0.11'], port_security=['fa:16:3e:d2:89:1f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'cf45f1e3-b80d-4213-80aa-995f57a9a476', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c160915-cb1d-4981-a2c7-30899c389f1d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ac27d3d1c734f4bab455262f79d3106', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9be940b5-626b-4f2c-8cdc-0aa939d2b4a5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5128d898-210e-40a7-b165-d9fdd3199b44, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], logical_port=8f00051e-bd87-48eb-aba6-5dbf3d527aef) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:32:06 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:06.007 106776 INFO neutron.agent.ovn.metadata.agent [-] Port 8f00051e-bd87-48eb-aba6-5dbf3d527aef in datapath 6c160915-cb1d-4981-a2c7-30899c389f1d unbound from our chassis
Nov 24 22:32:06 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:06.011 106776 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6c160915-cb1d-4981-a2c7-30899c389f1d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 22:32:06 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:06.012 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[e8015643-c71e-477c-9088-cc8d26264b7d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:32:06 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:06.013 106776 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6c160915-cb1d-4981-a2c7-30899c389f1d namespace which is not needed anymore
Nov 24 22:32:06 compute-0 nova_compute[189608]: 2025-11-24 22:32:06.019 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:06 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Nov 24 22:32:06 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Consumed 45.782s CPU time.
Nov 24 22:32:06 compute-0 systemd-machined[155884]: Machine qemu-10-instance-0000000a terminated.
Nov 24 22:32:06 compute-0 nova_compute[189608]: 2025-11-24 22:32:06.189 189613 INFO nova.virt.libvirt.driver [-] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Instance destroyed successfully.
Nov 24 22:32:06 compute-0 nova_compute[189608]: 2025-11-24 22:32:06.189 189613 DEBUG nova.objects.instance [None req-6e905784-a96f-46e2-82e9-e02a23a46057 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Lazy-loading 'resources' on Instance uuid cf45f1e3-b80d-4213-80aa-995f57a9a476 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:32:06 compute-0 nova_compute[189608]: 2025-11-24 22:32:06.200 189613 DEBUG nova.virt.libvirt.vif [None req-6e905784-a96f-46e2-82e9-e02a23a46057 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T22:30:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-501882066',display_name='tempest-TestNetworkBasicOps-server-501882066',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-501882066',id=10,image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPlZjxuKSGTmxgVDchYk+GJcLGRXvs9CnKpiEnZ/PwqWKNeGx51EqI/uX1m3Drik1zAThCC+0gOJLoaHRaz7LgOa+K81EwBXRWqudbIpt61K0/Cg/CZImZCe2iCDs0sZJg==',key_name='tempest-TestNetworkBasicOps-1667007681',keypairs=<?>,launch_index=0,launched_at=2025-11-24T22:30:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4ac27d3d1c734f4bab455262f79d3106',ramdisk_id='',reservation_id='r-hxvr2jjm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-488656933',owner_user_name='tempest-TestNetworkBasicOps-488656933-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T22:30:17Z,user_data=None,user_id='1599850e48894151b7909b89547cd9e2',uuid=cf45f1e3-b80d-4213-80aa-995f57a9a476,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8f00051e-bd87-48eb-aba6-5dbf3d527aef", "address": "fa:16:3e:d2:89:1f", "network": {"id": "6c160915-cb1d-4981-a2c7-30899c389f1d", "bridge": "br-int", "label": "tempest-network-smoke--162777084", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ac27d3d1c734f4bab455262f79d3106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f00051e-bd", "ovs_interfaceid": "8f00051e-bd87-48eb-aba6-5dbf3d527aef", "qbh_params": null, "qbg_params": 
null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 24 22:32:06 compute-0 nova_compute[189608]: 2025-11-24 22:32:06.201 189613 DEBUG nova.network.os_vif_util [None req-6e905784-a96f-46e2-82e9-e02a23a46057 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Converting VIF {"id": "8f00051e-bd87-48eb-aba6-5dbf3d527aef", "address": "fa:16:3e:d2:89:1f", "network": {"id": "6c160915-cb1d-4981-a2c7-30899c389f1d", "bridge": "br-int", "label": "tempest-network-smoke--162777084", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ac27d3d1c734f4bab455262f79d3106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f00051e-bd", "ovs_interfaceid": "8f00051e-bd87-48eb-aba6-5dbf3d527aef", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 22:32:06 compute-0 nova_compute[189608]: 2025-11-24 22:32:06.202 189613 DEBUG nova.network.os_vif_util [None req-6e905784-a96f-46e2-82e9-e02a23a46057 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d2:89:1f,bridge_name='br-int',has_traffic_filtering=True,id=8f00051e-bd87-48eb-aba6-5dbf3d527aef,network=Network(6c160915-cb1d-4981-a2c7-30899c389f1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f00051e-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 22:32:06 compute-0 nova_compute[189608]: 2025-11-24 22:32:06.203 189613 DEBUG os_vif [None req-6e905784-a96f-46e2-82e9-e02a23a46057 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d2:89:1f,bridge_name='br-int',has_traffic_filtering=True,id=8f00051e-bd87-48eb-aba6-5dbf3d527aef,network=Network(6c160915-cb1d-4981-a2c7-30899c389f1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f00051e-bd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 24 22:32:06 compute-0 nova_compute[189608]: 2025-11-24 22:32:06.205 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:06 compute-0 nova_compute[189608]: 2025-11-24 22:32:06.206 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f00051e-bd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:32:06 compute-0 nova_compute[189608]: 2025-11-24 22:32:06.208 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:06 compute-0 nova_compute[189608]: 2025-11-24 22:32:06.210 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:06 compute-0 nova_compute[189608]: 2025-11-24 22:32:06.215 189613 INFO os_vif [None req-6e905784-a96f-46e2-82e9-e02a23a46057 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d2:89:1f,bridge_name='br-int',has_traffic_filtering=True,id=8f00051e-bd87-48eb-aba6-5dbf3d527aef,network=Network(6c160915-cb1d-4981-a2c7-30899c389f1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f00051e-bd')
Nov 24 22:32:06 compute-0 nova_compute[189608]: 2025-11-24 22:32:06.216 189613 INFO nova.virt.libvirt.driver [None req-6e905784-a96f-46e2-82e9-e02a23a46057 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Deleting instance files /var/lib/nova/instances/cf45f1e3-b80d-4213-80aa-995f57a9a476_del
Nov 24 22:32:06 compute-0 nova_compute[189608]: 2025-11-24 22:32:06.217 189613 INFO nova.virt.libvirt.driver [None req-6e905784-a96f-46e2-82e9-e02a23a46057 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Deletion of /var/lib/nova/instances/cf45f1e3-b80d-4213-80aa-995f57a9a476_del complete
Nov 24 22:32:06 compute-0 neutron-haproxy-ovnmeta-6c160915-cb1d-4981-a2c7-30899c389f1d[252488]: [NOTICE]   (252492) : haproxy version is 2.8.14-c23fe91
Nov 24 22:32:06 compute-0 neutron-haproxy-ovnmeta-6c160915-cb1d-4981-a2c7-30899c389f1d[252488]: [NOTICE]   (252492) : path to executable is /usr/sbin/haproxy
Nov 24 22:32:06 compute-0 neutron-haproxy-ovnmeta-6c160915-cb1d-4981-a2c7-30899c389f1d[252488]: [WARNING]  (252492) : Exiting Master process...
Nov 24 22:32:06 compute-0 neutron-haproxy-ovnmeta-6c160915-cb1d-4981-a2c7-30899c389f1d[252488]: [ALERT]    (252492) : Current worker (252494) exited with code 143 (Terminated)
Nov 24 22:32:06 compute-0 neutron-haproxy-ovnmeta-6c160915-cb1d-4981-a2c7-30899c389f1d[252488]: [WARNING]  (252492) : All workers exited. Exiting... (0)
Nov 24 22:32:06 compute-0 systemd[1]: libpod-7581f0474dae89f2cb8063700043c14c424167d8202319b4d50a3989fa6d4a17.scope: Deactivated successfully.
Nov 24 22:32:06 compute-0 podman[254196]: 2025-11-24 22:32:06.281497281 +0000 UTC m=+0.073733611 container died 7581f0474dae89f2cb8063700043c14c424167d8202319b4d50a3989fa6d4a17 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6c160915-cb1d-4981-a2c7-30899c389f1d, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 24 22:32:06 compute-0 nova_compute[189608]: 2025-11-24 22:32:06.289 189613 INFO nova.compute.manager [None req-6e905784-a96f-46e2-82e9-e02a23a46057 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Took 0.38 seconds to destroy the instance on the hypervisor.
Nov 24 22:32:06 compute-0 nova_compute[189608]: 2025-11-24 22:32:06.291 189613 DEBUG oslo.service.loopingcall [None req-6e905784-a96f-46e2-82e9-e02a23a46057 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 24 22:32:06 compute-0 nova_compute[189608]: 2025-11-24 22:32:06.293 189613 DEBUG nova.compute.manager [-] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 24 22:32:06 compute-0 nova_compute[189608]: 2025-11-24 22:32:06.293 189613 DEBUG nova.network.neutron [-] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 24 22:32:06 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7581f0474dae89f2cb8063700043c14c424167d8202319b4d50a3989fa6d4a17-userdata-shm.mount: Deactivated successfully.
Nov 24 22:32:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd13bd9b25c27a95077c93c5289af0a516146f8bc40d1dcf25c3c82451e63854-merged.mount: Deactivated successfully.
Nov 24 22:32:06 compute-0 podman[254196]: 2025-11-24 22:32:06.346707818 +0000 UTC m=+0.138944128 container cleanup 7581f0474dae89f2cb8063700043c14c424167d8202319b4d50a3989fa6d4a17 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6c160915-cb1d-4981-a2c7-30899c389f1d, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 24 22:32:06 compute-0 systemd[1]: libpod-conmon-7581f0474dae89f2cb8063700043c14c424167d8202319b4d50a3989fa6d4a17.scope: Deactivated successfully.
Nov 24 22:32:06 compute-0 podman[254226]: 2025-11-24 22:32:06.447063296 +0000 UTC m=+0.072431391 container remove 7581f0474dae89f2cb8063700043c14c424167d8202319b4d50a3989fa6d4a17 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6c160915-cb1d-4981-a2c7-30899c389f1d, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 22:32:06 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:06.462 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[a4f074df-762f-4bfd-be69-dff0b3b35d9d]: (4, ('Mon Nov 24 10:32:06 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6c160915-cb1d-4981-a2c7-30899c389f1d (7581f0474dae89f2cb8063700043c14c424167d8202319b4d50a3989fa6d4a17)\n7581f0474dae89f2cb8063700043c14c424167d8202319b4d50a3989fa6d4a17\nMon Nov 24 10:32:06 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6c160915-cb1d-4981-a2c7-30899c389f1d (7581f0474dae89f2cb8063700043c14c424167d8202319b4d50a3989fa6d4a17)\n7581f0474dae89f2cb8063700043c14c424167d8202319b4d50a3989fa6d4a17\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
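The privsep reply above is the captured output of a helper that stops and removes the per-network haproxy metadata-proxy container. The equivalent direct podman invocations would look roughly like this (a sketch only; the container name is copied from the log):

```python
# Hedged sketch of the container teardown reported above.
import subprocess

name = 'neutron-haproxy-ovnmeta-6c160915-cb1d-4981-a2c7-30899c389f1d'
subprocess.run(['podman', 'stop', name], check=True)  # SIGTERM -> worker exits 143
subprocess.run(['podman', 'rm', name], check=True)    # matches the "container remove" event
```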
Nov 24 22:32:06 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:06.464 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[4d6d036b-703c-47c9-879e-6711f99fb927]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:32:06 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:06.465 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c160915-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:32:06 compute-0 kernel: tap6c160915-c0: left promiscuous mode
Nov 24 22:32:06 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:06.480 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[715c82b9-ceec-4374-b7c7-e08db12d206c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:32:06 compute-0 nova_compute[189608]: 2025-11-24 22:32:06.495 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:06 compute-0 nova_compute[189608]: 2025-11-24 22:32:06.510 189613 DEBUG nova.compute.manager [req-8e1aefc5-0b49-4440-a4cc-3107ad935cc2 req-48865b65-c389-4759-aefb-df23eaa45045 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Received event network-vif-unplugged-8f00051e-bd87-48eb-aba6-5dbf3d527aef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:32:06 compute-0 nova_compute[189608]: 2025-11-24 22:32:06.510 189613 DEBUG oslo_concurrency.lockutils [req-8e1aefc5-0b49-4440-a4cc-3107ad935cc2 req-48865b65-c389-4759-aefb-df23eaa45045 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "cf45f1e3-b80d-4213-80aa-995f57a9a476-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:32:06 compute-0 nova_compute[189608]: 2025-11-24 22:32:06.511 189613 DEBUG oslo_concurrency.lockutils [req-8e1aefc5-0b49-4440-a4cc-3107ad935cc2 req-48865b65-c389-4759-aefb-df23eaa45045 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "cf45f1e3-b80d-4213-80aa-995f57a9a476-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:32:06 compute-0 nova_compute[189608]: 2025-11-24 22:32:06.511 189613 DEBUG oslo_concurrency.lockutils [req-8e1aefc5-0b49-4440-a4cc-3107ad935cc2 req-48865b65-c389-4759-aefb-df23eaa45045 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "cf45f1e3-b80d-4213-80aa-995f57a9a476-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:32:06 compute-0 nova_compute[189608]: 2025-11-24 22:32:06.512 189613 DEBUG nova.compute.manager [req-8e1aefc5-0b49-4440-a4cc-3107ad935cc2 req-48865b65-c389-4759-aefb-df23eaa45045 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] No waiting events found dispatching network-vif-unplugged-8f00051e-bd87-48eb-aba6-5dbf3d527aef pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 22:32:06 compute-0 nova_compute[189608]: 2025-11-24 22:32:06.512 189613 DEBUG nova.compute.manager [req-8e1aefc5-0b49-4440-a4cc-3107ad935cc2 req-48865b65-c389-4759-aefb-df23eaa45045 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Received event network-vif-unplugged-8f00051e-bd87-48eb-aba6-5dbf3d527aef for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 24 22:32:06 compute-0 nova_compute[189608]: 2025-11-24 22:32:06.512 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:06 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:06.512 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[fe676b03-553c-48a3-97ea-03973cfe8b00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:32:06 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:06.515 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[a1504009-5189-4aac-87a1-099f89ae1430]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:32:06 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:06.531 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[82310f9d-8521-4a78-abec-5d9983bad72b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 525946, 'reachable_time': 21234, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254241, 'error': None, 'target': 'ovnmeta-6c160915-cb1d-4981-a2c7-30899c389f1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:32:06 compute-0 systemd[1]: run-netns-ovnmeta\x2d6c160915\x2dcb1d\x2d4981\x2da2c7\x2d30899c389f1d.mount: Deactivated successfully.
Nov 24 22:32:06 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:06.535 106891 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6c160915-cb1d-4981-a2c7-30899c389f1d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 24 22:32:06 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:06.536 106891 DEBUG oslo.privsep.daemon [-] privsep: reply[d6570dae-20ed-4037-8461-fb88b68eca85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
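The "Namespace ovnmeta-... deleted. remove_netns" line is the metadata agent tearing down the now-unneeded network namespace through its privsep daemon. A minimal sketch of the same operation done directly with pyroute2 (root required; the namespace name is copied from the log):

```python
# Hedged sketch of the namespace removal reported above.
from pyroute2 import netns

name = 'ovnmeta-6c160915-cb1d-4981-a2c7-30899c389f1d'
if name in netns.listnetns():
    netns.remove(name)
```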
Nov 24 22:32:07 compute-0 nova_compute[189608]: 2025-11-24 22:32:07.378 189613 DEBUG nova.network.neutron [-] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:32:07 compute-0 nova_compute[189608]: 2025-11-24 22:32:07.403 189613 INFO nova.compute.manager [-] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Took 1.11 seconds to deallocate network for instance.
Nov 24 22:32:07 compute-0 nova_compute[189608]: 2025-11-24 22:32:07.460 189613 DEBUG oslo_concurrency.lockutils [None req-6e905784-a96f-46e2-82e9-e02a23a46057 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:32:07 compute-0 nova_compute[189608]: 2025-11-24 22:32:07.460 189613 DEBUG oslo_concurrency.lockutils [None req-6e905784-a96f-46e2-82e9-e02a23a46057 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:32:07 compute-0 nova_compute[189608]: 2025-11-24 22:32:07.498 189613 DEBUG nova.compute.manager [req-2cbc1114-3ed9-40fb-8503-44a0606b5b61 req-4698b9b4-b5e8-48ec-b4ae-c2e71729a8f1 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Received event network-vif-deleted-8f00051e-bd87-48eb-aba6-5dbf3d527aef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:32:07 compute-0 nova_compute[189608]: 2025-11-24 22:32:07.563 189613 DEBUG nova.compute.provider_tree [None req-6e905784-a96f-46e2-82e9-e02a23a46057 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:32:07 compute-0 nova_compute[189608]: 2025-11-24 22:32:07.578 189613 DEBUG nova.scheduler.client.report [None req-6e905784-a96f-46e2-82e9-e02a23a46057 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:32:07 compute-0 nova_compute[189608]: 2025-11-24 22:32:07.595 189613 DEBUG oslo_concurrency.lockutils [None req-6e905784-a96f-46e2-82e9-e02a23a46057 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.135s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:32:07 compute-0 nova_compute[189608]: 2025-11-24 22:32:07.623 189613 INFO nova.scheduler.client.report [None req-6e905784-a96f-46e2-82e9-e02a23a46057 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Deleted allocations for instance cf45f1e3-b80d-4213-80aa-995f57a9a476
Nov 24 22:32:07 compute-0 nova_compute[189608]: 2025-11-24 22:32:07.715 189613 DEBUG oslo_concurrency.lockutils [None req-6e905784-a96f-46e2-82e9-e02a23a46057 1599850e48894151b7909b89547cd9e2 4ac27d3d1c734f4bab455262f79d3106 - - default default] Lock "cf45f1e3-b80d-4213-80aa-995f57a9a476" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.808s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:32:08 compute-0 nova_compute[189608]: 2025-11-24 22:32:08.613 189613 DEBUG nova.compute.manager [req-f0542641-15cc-453a-b4bb-d5439d1e5e08 req-3fe4706c-cd35-411e-a377-2a9a62ad81fa c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Received event network-vif-plugged-8f00051e-bd87-48eb-aba6-5dbf3d527aef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:32:08 compute-0 nova_compute[189608]: 2025-11-24 22:32:08.613 189613 DEBUG oslo_concurrency.lockutils [req-f0542641-15cc-453a-b4bb-d5439d1e5e08 req-3fe4706c-cd35-411e-a377-2a9a62ad81fa c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "cf45f1e3-b80d-4213-80aa-995f57a9a476-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:32:08 compute-0 nova_compute[189608]: 2025-11-24 22:32:08.613 189613 DEBUG oslo_concurrency.lockutils [req-f0542641-15cc-453a-b4bb-d5439d1e5e08 req-3fe4706c-cd35-411e-a377-2a9a62ad81fa c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "cf45f1e3-b80d-4213-80aa-995f57a9a476-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:32:08 compute-0 nova_compute[189608]: 2025-11-24 22:32:08.614 189613 DEBUG oslo_concurrency.lockutils [req-f0542641-15cc-453a-b4bb-d5439d1e5e08 req-3fe4706c-cd35-411e-a377-2a9a62ad81fa c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "cf45f1e3-b80d-4213-80aa-995f57a9a476-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:32:08 compute-0 nova_compute[189608]: 2025-11-24 22:32:08.614 189613 DEBUG nova.compute.manager [req-f0542641-15cc-453a-b4bb-d5439d1e5e08 req-3fe4706c-cd35-411e-a377-2a9a62ad81fa c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] No waiting events found dispatching network-vif-plugged-8f00051e-bd87-48eb-aba6-5dbf3d527aef pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 22:32:08 compute-0 nova_compute[189608]: 2025-11-24 22:32:08.615 189613 WARNING nova.compute.manager [req-f0542641-15cc-453a-b4bb-d5439d1e5e08 req-3fe4706c-cd35-411e-a377-2a9a62ad81fa c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Received unexpected event network-vif-plugged-8f00051e-bd87-48eb-aba6-5dbf3d527aef for instance with vm_state deleted and task_state None.
Nov 24 22:32:09 compute-0 nova_compute[189608]: 2025-11-24 22:32:09.109 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:10 compute-0 sshd-session[254160]: Invalid user ftp_inst from 185.217.1.246 port 16793
Nov 24 22:32:10 compute-0 podman[254243]: 2025-11-24 22:32:10.485668885 +0000 UTC m=+0.106880712 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 24 22:32:11 compute-0 nova_compute[189608]: 2025-11-24 22:32:11.083 189613 DEBUG oslo_concurrency.lockutils [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Acquiring lock "b256c629-ce31-4c1a-a7a6-ed66c07e691a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:32:11 compute-0 nova_compute[189608]: 2025-11-24 22:32:11.083 189613 DEBUG oslo_concurrency.lockutils [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Lock "b256c629-ce31-4c1a-a7a6-ed66c07e691a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:32:11 compute-0 nova_compute[189608]: 2025-11-24 22:32:11.128 189613 DEBUG nova.compute.manager [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 24 22:32:11 compute-0 nova_compute[189608]: 2025-11-24 22:32:11.209 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:11 compute-0 nova_compute[189608]: 2025-11-24 22:32:11.286 189613 DEBUG oslo_concurrency.lockutils [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:32:11 compute-0 nova_compute[189608]: 2025-11-24 22:32:11.286 189613 DEBUG oslo_concurrency.lockutils [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:32:11 compute-0 nova_compute[189608]: 2025-11-24 22:32:11.304 189613 DEBUG nova.virt.hardware [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 24 22:32:11 compute-0 nova_compute[189608]: 2025-11-24 22:32:11.306 189613 INFO nova.compute.claims [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Claim successful on node compute-0.ctlplane.example.com
Nov 24 22:32:11 compute-0 sshd-session[254160]: Disconnecting invalid user ftp_inst 185.217.1.246 port 16793: Change of username or service not allowed: (ftp_inst,ssh-connection) -> (user,ssh-connection) [preauth]
Nov 24 22:32:11 compute-0 nova_compute[189608]: 2025-11-24 22:32:11.440 189613 DEBUG nova.compute.provider_tree [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:32:11 compute-0 nova_compute[189608]: 2025-11-24 22:32:11.459 189613 DEBUG nova.scheduler.client.report [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:32:11 compute-0 nova_compute[189608]: 2025-11-24 22:32:11.478 189613 DEBUG oslo_concurrency.lockutils [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.191s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:32:11 compute-0 nova_compute[189608]: 2025-11-24 22:32:11.478 189613 DEBUG nova.compute.manager [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 24 22:32:11 compute-0 nova_compute[189608]: 2025-11-24 22:32:11.524 189613 DEBUG nova.compute.manager [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 24 22:32:11 compute-0 nova_compute[189608]: 2025-11-24 22:32:11.524 189613 DEBUG nova.network.neutron [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 24 22:32:11 compute-0 nova_compute[189608]: 2025-11-24 22:32:11.537 189613 INFO nova.virt.libvirt.driver [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 24 22:32:11 compute-0 nova_compute[189608]: 2025-11-24 22:32:11.550 189613 DEBUG nova.compute.manager [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 24 22:32:11 compute-0 nova_compute[189608]: 2025-11-24 22:32:11.619 189613 DEBUG nova.compute.manager [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 24 22:32:11 compute-0 nova_compute[189608]: 2025-11-24 22:32:11.620 189613 DEBUG nova.virt.libvirt.driver [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 24 22:32:11 compute-0 nova_compute[189608]: 2025-11-24 22:32:11.621 189613 INFO nova.virt.libvirt.driver [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Creating image(s)
Nov 24 22:32:11 compute-0 nova_compute[189608]: 2025-11-24 22:32:11.622 189613 DEBUG oslo_concurrency.lockutils [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Acquiring lock "/var/lib/nova/instances/b256c629-ce31-4c1a-a7a6-ed66c07e691a/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:32:11 compute-0 nova_compute[189608]: 2025-11-24 22:32:11.622 189613 DEBUG oslo_concurrency.lockutils [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Lock "/var/lib/nova/instances/b256c629-ce31-4c1a-a7a6-ed66c07e691a/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:32:11 compute-0 nova_compute[189608]: 2025-11-24 22:32:11.623 189613 DEBUG oslo_concurrency.lockutils [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Lock "/var/lib/nova/instances/b256c629-ce31-4c1a-a7a6-ed66c07e691a/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:32:11 compute-0 nova_compute[189608]: 2025-11-24 22:32:11.636 189613 DEBUG oslo_concurrency.processutils [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:32:11 compute-0 nova_compute[189608]: 2025-11-24 22:32:11.719 189613 DEBUG nova.policy [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1ff3ad5c90cd47639553ad5015a81aca', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b288cf23cdd049f48dfaafd888b33ea5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 24 22:32:11 compute-0 nova_compute[189608]: 2025-11-24 22:32:11.724 189613 DEBUG oslo_concurrency.processutils [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:32:11 compute-0 nova_compute[189608]: 2025-11-24 22:32:11.725 189613 DEBUG oslo_concurrency.lockutils [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Acquiring lock "a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:32:11 compute-0 nova_compute[189608]: 2025-11-24 22:32:11.725 189613 DEBUG oslo_concurrency.lockutils [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Lock "a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:32:11 compute-0 nova_compute[189608]: 2025-11-24 22:32:11.742 189613 DEBUG oslo_concurrency.processutils [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:32:11 compute-0 nova_compute[189608]: 2025-11-24 22:32:11.824 189613 DEBUG oslo_concurrency.processutils [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:32:11 compute-0 nova_compute[189608]: 2025-11-24 22:32:11.825 189613 DEBUG oslo_concurrency.processutils [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e,backing_fmt=raw /var/lib/nova/instances/b256c629-ce31-4c1a-a7a6-ed66c07e691a/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:32:11 compute-0 nova_compute[189608]: 2025-11-24 22:32:11.874 189613 DEBUG oslo_concurrency.processutils [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e,backing_fmt=raw /var/lib/nova/instances/b256c629-ce31-4c1a-a7a6-ed66c07e691a/disk 1073741824" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:32:11 compute-0 nova_compute[189608]: 2025-11-24 22:32:11.875 189613 DEBUG oslo_concurrency.lockutils [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Lock "a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:32:11 compute-0 nova_compute[189608]: 2025-11-24 22:32:11.876 189613 DEBUG oslo_concurrency.processutils [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:32:11 compute-0 nova_compute[189608]: 2025-11-24 22:32:11.949 189613 DEBUG oslo_concurrency.processutils [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:32:11 compute-0 nova_compute[189608]: 2025-11-24 22:32:11.950 189613 DEBUG nova.virt.disk.api [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Checking if we can resize image /var/lib/nova/instances/b256c629-ce31-4c1a-a7a6-ed66c07e691a/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 24 22:32:11 compute-0 nova_compute[189608]: 2025-11-24 22:32:11.951 189613 DEBUG oslo_concurrency.processutils [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b256c629-ce31-4c1a-a7a6-ed66c07e691a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:32:12 compute-0 nova_compute[189608]: 2025-11-24 22:32:12.031 189613 DEBUG oslo_concurrency.processutils [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b256c629-ce31-4c1a-a7a6-ed66c07e691a/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:32:12 compute-0 nova_compute[189608]: 2025-11-24 22:32:12.032 189613 DEBUG nova.virt.disk.api [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Cannot resize image /var/lib/nova/instances/b256c629-ce31-4c1a-a7a6-ed66c07e691a/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Nov 24 22:32:12 compute-0 nova_compute[189608]: 2025-11-24 22:32:12.033 189613 DEBUG nova.objects.instance [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Lazy-loading 'migration_context' on Instance uuid b256c629-ce31-4c1a-a7a6-ed66c07e691a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:32:12 compute-0 nova_compute[189608]: 2025-11-24 22:32:12.048 189613 DEBUG nova.virt.libvirt.driver [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 24 22:32:12 compute-0 nova_compute[189608]: 2025-11-24 22:32:12.049 189613 DEBUG nova.virt.libvirt.driver [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Ensure instance console log exists: /var/lib/nova/instances/b256c629-ce31-4c1a-a7a6-ed66c07e691a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 24 22:32:12 compute-0 nova_compute[189608]: 2025-11-24 22:32:12.049 189613 DEBUG oslo_concurrency.lockutils [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:32:12 compute-0 nova_compute[189608]: 2025-11-24 22:32:12.050 189613 DEBUG oslo_concurrency.lockutils [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:32:12 compute-0 nova_compute[189608]: 2025-11-24 22:32:12.050 189613 DEBUG oslo_concurrency.lockutils [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:32:12 compute-0 ovn_controller[97889]: 2025-11-24T22:32:12Z|00163|binding|INFO|Releasing lport ce3870c0-48db-470b-8d5d-479134c9b554 from this chassis (sb_readonly=0)
Nov 24 22:32:12 compute-0 ovn_controller[97889]: 2025-11-24T22:32:12Z|00164|binding|INFO|Releasing lport 7dcd4ddb-3860-49b9-87ed-1daf692defef from this chassis (sb_readonly=0)
Nov 24 22:32:12 compute-0 nova_compute[189608]: 2025-11-24 22:32:12.152 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:12 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:12.369 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7e:33:1a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '7e:8a:65:da:aa:a2'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:32:12 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:12.375 106776 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 22:32:12 compute-0 nova_compute[189608]: 2025-11-24 22:32:12.380 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:12 compute-0 nova_compute[189608]: 2025-11-24 22:32:12.494 189613 DEBUG nova.network.neutron [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Successfully created port: 91fe5820-9c04-4bb0-94bb-b6c3068e81e1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 24 22:32:14 compute-0 nova_compute[189608]: 2025-11-24 22:32:14.112 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:14 compute-0 nova_compute[189608]: 2025-11-24 22:32:14.334 189613 DEBUG nova.network.neutron [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Successfully updated port: 91fe5820-9c04-4bb0-94bb-b6c3068e81e1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 24 22:32:14 compute-0 nova_compute[189608]: 2025-11-24 22:32:14.350 189613 DEBUG oslo_concurrency.lockutils [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Acquiring lock "refresh_cache-b256c629-ce31-4c1a-a7a6-ed66c07e691a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:32:14 compute-0 nova_compute[189608]: 2025-11-24 22:32:14.350 189613 DEBUG oslo_concurrency.lockutils [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Acquired lock "refresh_cache-b256c629-ce31-4c1a-a7a6-ed66c07e691a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:32:14 compute-0 nova_compute[189608]: 2025-11-24 22:32:14.351 189613 DEBUG nova.network.neutron [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 22:32:14 compute-0 nova_compute[189608]: 2025-11-24 22:32:14.438 189613 DEBUG nova.compute.manager [req-65ff4694-0d6c-4c3d-a11b-672be58027e8 req-89af2c70-d1da-44b1-a66b-69c11e942767 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Received event network-changed-91fe5820-9c04-4bb0-94bb-b6c3068e81e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:32:14 compute-0 nova_compute[189608]: 2025-11-24 22:32:14.438 189613 DEBUG nova.compute.manager [req-65ff4694-0d6c-4c3d-a11b-672be58027e8 req-89af2c70-d1da-44b1-a66b-69c11e942767 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Refreshing instance network info cache due to event network-changed-91fe5820-9c04-4bb0-94bb-b6c3068e81e1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 22:32:14 compute-0 nova_compute[189608]: 2025-11-24 22:32:14.439 189613 DEBUG oslo_concurrency.lockutils [req-65ff4694-0d6c-4c3d-a11b-672be58027e8 req-89af2c70-d1da-44b1-a66b-69c11e942767 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "refresh_cache-b256c629-ce31-4c1a-a7a6-ed66c07e691a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:32:14 compute-0 nova_compute[189608]: 2025-11-24 22:32:14.524 189613 DEBUG nova.network.neutron [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 22:32:14 compute-0 podman[254282]: 2025-11-24 22:32:14.836300409 +0000 UTC m=+0.150658962 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.092 189613 DEBUG nova.network.neutron [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Updating instance_info_cache with network_info: [{"id": "91fe5820-9c04-4bb0-94bb-b6c3068e81e1", "address": "fa:16:3e:dd:fb:5f", "network": {"id": "4153540b-1891-459a-9fd1-3ba9595f1a33", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1140659131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b288cf23cdd049f48dfaafd888b33ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91fe5820-9c", "ovs_interfaceid": "91fe5820-9c04-4bb0-94bb-b6c3068e81e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.134 189613 DEBUG oslo_concurrency.lockutils [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Releasing lock "refresh_cache-b256c629-ce31-4c1a-a7a6-ed66c07e691a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.135 189613 DEBUG nova.compute.manager [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Instance network_info: |[{"id": "91fe5820-9c04-4bb0-94bb-b6c3068e81e1", "address": "fa:16:3e:dd:fb:5f", "network": {"id": "4153540b-1891-459a-9fd1-3ba9595f1a33", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1140659131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b288cf23cdd049f48dfaafd888b33ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91fe5820-9c", "ovs_interfaceid": "91fe5820-9c04-4bb0-94bb-b6c3068e81e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.136 189613 DEBUG oslo_concurrency.lockutils [req-65ff4694-0d6c-4c3d-a11b-672be58027e8 req-89af2c70-d1da-44b1-a66b-69c11e942767 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquired lock "refresh_cache-b256c629-ce31-4c1a-a7a6-ed66c07e691a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.136 189613 DEBUG nova.network.neutron [req-65ff4694-0d6c-4c3d-a11b-672be58027e8 req-89af2c70-d1da-44b1-a66b-69c11e942767 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Refreshing network info cache for port 91fe5820-9c04-4bb0-94bb-b6c3068e81e1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.141 189613 DEBUG nova.virt.libvirt.driver [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Start _get_guest_xml network_info=[{"id": "91fe5820-9c04-4bb0-94bb-b6c3068e81e1", "address": "fa:16:3e:dd:fb:5f", "network": {"id": "4153540b-1891-459a-9fd1-3ba9595f1a33", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1140659131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b288cf23cdd049f48dfaafd888b33ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91fe5820-9c", "ovs_interfaceid": "91fe5820-9c04-4bb0-94bb-b6c3068e81e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T22:28:16Z,direct_url=<?>,disk_format='qcow2',id=ec71d7d5-c197-4331-bf8d-e2de71a8419f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='309342b7e3e849b2a5dd56651d8fa068',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T22:28:18Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'size': 0, 'encryption_format': None, 'guest_format': None, 'image_id': 'ec71d7d5-c197-4331-bf8d-e2de71a8419f'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.151 189613 WARNING nova.virt.libvirt.driver [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.163 189613 DEBUG nova.virt.libvirt.host [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.164 189613 DEBUG nova.virt.libvirt.host [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.170 189613 DEBUG nova.virt.libvirt.host [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.170 189613 DEBUG nova.virt.libvirt.host [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.171 189613 DEBUG nova.virt.libvirt.driver [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.171 189613 DEBUG nova.virt.hardware [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-24T22:28:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a49f1e6c-1051-4dea-812e-0063121444a0',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T22:28:16Z,direct_url=<?>,disk_format='qcow2',id=ec71d7d5-c197-4331-bf8d-e2de71a8419f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='309342b7e3e849b2a5dd56651d8fa068',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T22:28:18Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.172 189613 DEBUG nova.virt.hardware [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.173 189613 DEBUG nova.virt.hardware [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.173 189613 DEBUG nova.virt.hardware [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.174 189613 DEBUG nova.virt.hardware [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.174 189613 DEBUG nova.virt.hardware [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.175 189613 DEBUG nova.virt.hardware [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.175 189613 DEBUG nova.virt.hardware [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.175 189613 DEBUG nova.virt.hardware [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.176 189613 DEBUG nova.virt.hardware [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.177 189613 DEBUG nova.virt.hardware [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.183 189613 DEBUG nova.virt.libvirt.vif [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T22:32:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1941519261',display_name='tempest-TestServerBasicOps-server-1941519261',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1941519261',id=14,image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHhiiOkq9iUOdXjNpHmjtbZnprF40l9Ok8bS+ZGzP1suJLrBZla3uFjyC2KQSh8EA0qUn9JfUv9Ai00BpomIa3LapwlsNuCD5RdfS+v0/E5gEvSnYUQgcor+P5+PXWxL+A==',key_name='tempest-TestServerBasicOps-633402408',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b288cf23cdd049f48dfaafd888b33ea5',ramdisk_id='',reservation_id='r-05kp599q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-396408097',owner_user_name='tempest-TestServerBasicOps-396408097-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T22:32:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1ff3ad5c90cd47639553ad5015a81aca',uuid=b256c629-ce31-4c1a-a7a6-ed66c07e691a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "91fe5820-9c04-4bb0-94bb-b6c3068e81e1", "address": "fa:16:3e:dd:fb:5f", "network": {"id": "4153540b-1891-459a-9fd1-3ba9595f1a33", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1140659131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b288cf23cdd049f48dfaafd888b33ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91fe5820-9c", "ovs_interfaceid": 
"91fe5820-9c04-4bb0-94bb-b6c3068e81e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.185 189613 DEBUG nova.network.os_vif_util [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Converting VIF {"id": "91fe5820-9c04-4bb0-94bb-b6c3068e81e1", "address": "fa:16:3e:dd:fb:5f", "network": {"id": "4153540b-1891-459a-9fd1-3ba9595f1a33", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1140659131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b288cf23cdd049f48dfaafd888b33ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91fe5820-9c", "ovs_interfaceid": "91fe5820-9c04-4bb0-94bb-b6c3068e81e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.186 189613 DEBUG nova.network.os_vif_util [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:fb:5f,bridge_name='br-int',has_traffic_filtering=True,id=91fe5820-9c04-4bb0-94bb-b6c3068e81e1,network=Network(4153540b-1891-459a-9fd1-3ba9595f1a33),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91fe5820-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.188 189613 DEBUG nova.objects.instance [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Lazy-loading 'pci_devices' on Instance uuid b256c629-ce31-4c1a-a7a6-ed66c07e691a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.201 189613 DEBUG nova.virt.libvirt.driver [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] End _get_guest_xml xml=<domain type="kvm">
Nov 24 22:32:16 compute-0 nova_compute[189608]:   <uuid>b256c629-ce31-4c1a-a7a6-ed66c07e691a</uuid>
Nov 24 22:32:16 compute-0 nova_compute[189608]:   <name>instance-0000000e</name>
Nov 24 22:32:16 compute-0 nova_compute[189608]:   <memory>131072</memory>
Nov 24 22:32:16 compute-0 nova_compute[189608]:   <vcpu>1</vcpu>
Nov 24 22:32:16 compute-0 nova_compute[189608]:   <metadata>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 22:32:16 compute-0 nova_compute[189608]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:       <nova:name>tempest-TestServerBasicOps-server-1941519261</nova:name>
Nov 24 22:32:16 compute-0 nova_compute[189608]:       <nova:creationTime>2025-11-24 22:32:16</nova:creationTime>
Nov 24 22:32:16 compute-0 nova_compute[189608]:       <nova:flavor name="m1.nano">
Nov 24 22:32:16 compute-0 nova_compute[189608]:         <nova:memory>128</nova:memory>
Nov 24 22:32:16 compute-0 nova_compute[189608]:         <nova:disk>1</nova:disk>
Nov 24 22:32:16 compute-0 nova_compute[189608]:         <nova:swap>0</nova:swap>
Nov 24 22:32:16 compute-0 nova_compute[189608]:         <nova:ephemeral>0</nova:ephemeral>
Nov 24 22:32:16 compute-0 nova_compute[189608]:         <nova:vcpus>1</nova:vcpus>
Nov 24 22:32:16 compute-0 nova_compute[189608]:       </nova:flavor>
Nov 24 22:32:16 compute-0 nova_compute[189608]:       <nova:owner>
Nov 24 22:32:16 compute-0 nova_compute[189608]:         <nova:user uuid="1ff3ad5c90cd47639553ad5015a81aca">tempest-TestServerBasicOps-396408097-project-member</nova:user>
Nov 24 22:32:16 compute-0 nova_compute[189608]:         <nova:project uuid="b288cf23cdd049f48dfaafd888b33ea5">tempest-TestServerBasicOps-396408097</nova:project>
Nov 24 22:32:16 compute-0 nova_compute[189608]:       </nova:owner>
Nov 24 22:32:16 compute-0 nova_compute[189608]:       <nova:root type="image" uuid="ec71d7d5-c197-4331-bf8d-e2de71a8419f"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:       <nova:ports>
Nov 24 22:32:16 compute-0 nova_compute[189608]:         <nova:port uuid="91fe5820-9c04-4bb0-94bb-b6c3068e81e1">
Nov 24 22:32:16 compute-0 nova_compute[189608]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:         </nova:port>
Nov 24 22:32:16 compute-0 nova_compute[189608]:       </nova:ports>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     </nova:instance>
Nov 24 22:32:16 compute-0 nova_compute[189608]:   </metadata>
Nov 24 22:32:16 compute-0 nova_compute[189608]:   <sysinfo type="smbios">
Nov 24 22:32:16 compute-0 nova_compute[189608]:     <system>
Nov 24 22:32:16 compute-0 nova_compute[189608]:       <entry name="manufacturer">RDO</entry>
Nov 24 22:32:16 compute-0 nova_compute[189608]:       <entry name="product">OpenStack Compute</entry>
Nov 24 22:32:16 compute-0 nova_compute[189608]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 22:32:16 compute-0 nova_compute[189608]:       <entry name="serial">b256c629-ce31-4c1a-a7a6-ed66c07e691a</entry>
Nov 24 22:32:16 compute-0 nova_compute[189608]:       <entry name="uuid">b256c629-ce31-4c1a-a7a6-ed66c07e691a</entry>
Nov 24 22:32:16 compute-0 nova_compute[189608]:       <entry name="family">Virtual Machine</entry>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     </system>
Nov 24 22:32:16 compute-0 nova_compute[189608]:   </sysinfo>
Nov 24 22:32:16 compute-0 nova_compute[189608]:   <os>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     <boot dev="hd"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     <smbios mode="sysinfo"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:   </os>
Nov 24 22:32:16 compute-0 nova_compute[189608]:   <features>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     <acpi/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     <apic/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     <vmcoreinfo/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:   </features>
Nov 24 22:32:16 compute-0 nova_compute[189608]:   <clock offset="utc">
Nov 24 22:32:16 compute-0 nova_compute[189608]:     <timer name="pit" tickpolicy="delay"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     <timer name="hpet" present="no"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:   </clock>
Nov 24 22:32:16 compute-0 nova_compute[189608]:   <cpu mode="host-model" match="exact">
Nov 24 22:32:16 compute-0 nova_compute[189608]:     <topology sockets="1" cores="1" threads="1"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:   </cpu>
Nov 24 22:32:16 compute-0 nova_compute[189608]:   <devices>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     <disk type="file" device="disk">
Nov 24 22:32:16 compute-0 nova_compute[189608]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:       <source file="/var/lib/nova/instances/b256c629-ce31-4c1a-a7a6-ed66c07e691a/disk"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:       <target dev="vda" bus="virtio"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     </disk>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     <disk type="file" device="cdrom">
Nov 24 22:32:16 compute-0 nova_compute[189608]:       <driver name="qemu" type="raw" cache="none"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:       <source file="/var/lib/nova/instances/b256c629-ce31-4c1a-a7a6-ed66c07e691a/disk.config"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:       <target dev="sda" bus="sata"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     </disk>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     <interface type="ethernet">
Nov 24 22:32:16 compute-0 nova_compute[189608]:       <mac address="fa:16:3e:dd:fb:5f"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:       <model type="virtio"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:       <driver name="vhost" rx_queue_size="512"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:       <mtu size="1442"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:       <target dev="tap91fe5820-9c"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     </interface>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     <serial type="pty">
Nov 24 22:32:16 compute-0 nova_compute[189608]:       <log file="/var/lib/nova/instances/b256c629-ce31-4c1a-a7a6-ed66c07e691a/console.log" append="off"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     </serial>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     <video>
Nov 24 22:32:16 compute-0 nova_compute[189608]:       <model type="virtio"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     </video>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     <input type="tablet" bus="usb"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     <rng model="virtio">
Nov 24 22:32:16 compute-0 nova_compute[189608]:       <backend model="random">/dev/urandom</backend>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     </rng>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     <controller type="usb" index="0"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     <memballoon model="virtio">
Nov 24 22:32:16 compute-0 nova_compute[189608]:       <stats period="10"/>
Nov 24 22:32:16 compute-0 nova_compute[189608]:     </memballoon>
Nov 24 22:32:16 compute-0 nova_compute[189608]:   </devices>
Nov 24 22:32:16 compute-0 nova_compute[189608]: </domain>
Nov 24 22:32:16 compute-0 nova_compute[189608]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
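The <nova:instance> metadata block embedded in the generated domain XML is namespaced under http://openstack.org/xmlns/libvirt/nova/1.1, so the flavor, owner and port information can be pulled back out with the standard library alone. A short sketch that parses a domain XML string like the one dumped above (xml_text is assumed to hold the text between <domain ...> and </domain>):

    import xml.etree.ElementTree as ET

    NS = {"nova": "http://openstack.org/xmlns/libvirt/nova/1.1"}

    def summarize_domain(xml_text):
        root = ET.fromstring(xml_text)
        meta = root.find("./metadata/nova:instance", NS)
        flavor = meta.find("nova:flavor", NS)
        return {
            "uuid": root.findtext("uuid"),
            "name": meta.findtext("nova:name", namespaces=NS),
            "flavor": flavor.get("name"),
            "memory_mb": flavor.findtext("nova:memory", namespaces=NS),
            "vcpus": flavor.findtext("nova:vcpus", namespaces=NS),
            "ports": [p.get("uuid")
                      for p in meta.findall("nova:ports/nova:port", NS)],
        }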
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.203 189613 DEBUG nova.compute.manager [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Preparing to wait for external event network-vif-plugged-91fe5820-9c04-4bb0-94bb-b6c3068e81e1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.203 189613 DEBUG oslo_concurrency.lockutils [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Acquiring lock "b256c629-ce31-4c1a-a7a6-ed66c07e691a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.203 189613 DEBUG oslo_concurrency.lockutils [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Lock "b256c629-ce31-4c1a-a7a6-ed66c07e691a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.204 189613 DEBUG oslo_concurrency.lockutils [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Lock "b256c629-ce31-4c1a-a7a6-ed66c07e691a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
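prepare_for_instance_event takes the per-instance "<uuid>-events" lock only long enough to create or look up an event object that the spawn thread will later wait on when Neutron sends network-vif-plugged. The same pattern can be sketched with nothing but the standard library (a hypothetical class, not nova's InstanceEvents implementation):

    import threading
    from collections import defaultdict

    class InstanceEvents:
        """Per-instance registry of events a spawn thread can wait on."""

        def __init__(self):
            self._lock = threading.Lock()
            self._events = defaultdict(dict)   # uuid -> {event_name: Event}

        def prepare(self, instance_uuid, event_name):
            # Mirrors the short Acquiring/released lock pair in the log:
            # the lock only guards creation of the Event object.
            with self._lock:
                return self._events[instance_uuid].setdefault(
                    event_name, threading.Event())

        def pop_and_signal(self, instance_uuid, event_name):
            # Called when the external event (e.g. network-vif-plugged-<port>)
            # arrives from Neutron via the compute API.
            with self._lock:
                event = self._events[instance_uuid].pop(event_name, None)
            if event is not None:
                event.set()

    # Usage: the spawning thread prepares, plugs the VIF, then waits.
    registry = InstanceEvents()
    ev = registry.prepare("b256c629-ce31-4c1a-a7a6-ed66c07e691a",
                          "network-vif-plugged-91fe5820-9c04-4bb0-94bb-b6c3068e81e1")
    # ... plug the VIF, define and launch the domain ...
    # ev.wait(timeout=300)   # give up if the plugged event never arrives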
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.204 189613 DEBUG nova.virt.libvirt.vif [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T22:32:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1941519261',display_name='tempest-TestServerBasicOps-server-1941519261',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1941519261',id=14,image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHhiiOkq9iUOdXjNpHmjtbZnprF40l9Ok8bS+ZGzP1suJLrBZla3uFjyC2KQSh8EA0qUn9JfUv9Ai00BpomIa3LapwlsNuCD5RdfS+v0/E5gEvSnYUQgcor+P5+PXWxL+A==',key_name='tempest-TestServerBasicOps-633402408',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b288cf23cdd049f48dfaafd888b33ea5',ramdisk_id='',reservation_id='r-05kp599q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-396408097',owner_user_name='tempest-TestServerBasicOps-396408097-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T22:32:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1ff3ad5c90cd47639553ad5015a81aca',uuid=b256c629-ce31-4c1a-a7a6-ed66c07e691a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "91fe5820-9c04-4bb0-94bb-b6c3068e81e1", "address": "fa:16:3e:dd:fb:5f", "network": {"id": "4153540b-1891-459a-9fd1-3ba9595f1a33", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1140659131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b288cf23cdd049f48dfaafd888b33ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91fe5820-9c", "ovs_interfaceid": 
"91fe5820-9c04-4bb0-94bb-b6c3068e81e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.205 189613 DEBUG nova.network.os_vif_util [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Converting VIF {"id": "91fe5820-9c04-4bb0-94bb-b6c3068e81e1", "address": "fa:16:3e:dd:fb:5f", "network": {"id": "4153540b-1891-459a-9fd1-3ba9595f1a33", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1140659131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b288cf23cdd049f48dfaafd888b33ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91fe5820-9c", "ovs_interfaceid": "91fe5820-9c04-4bb0-94bb-b6c3068e81e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.205 189613 DEBUG nova.network.os_vif_util [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:fb:5f,bridge_name='br-int',has_traffic_filtering=True,id=91fe5820-9c04-4bb0-94bb-b6c3068e81e1,network=Network(4153540b-1891-459a-9fd1-3ba9595f1a33),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91fe5820-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.206 189613 DEBUG os_vif [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:fb:5f,bridge_name='br-int',has_traffic_filtering=True,id=91fe5820-9c04-4bb0-94bb-b6c3068e81e1,network=Network(4153540b-1891-459a-9fd1-3ba9595f1a33),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91fe5820-9c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.206 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.207 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.207 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.212 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.215 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.215 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap91fe5820-9c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.215 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap91fe5820-9c, col_values=(('external_ids', {'iface-id': '91fe5820-9c04-4bb0-94bb-b6c3068e81e1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dd:fb:5f', 'vm-uuid': 'b256c629-ce31-4c1a-a7a6-ed66c07e691a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.217 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.220 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 22:32:16 compute-0 NetworkManager[56413]: <info>  [1764023536.2211] manager: (tap91fe5820-9c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/70)
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.227 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.227 189613 INFO os_vif [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:fb:5f,bridge_name='br-int',has_traffic_filtering=True,id=91fe5820-9c04-4bb0-94bb-b6c3068e81e1,network=Network(4153540b-1891-459a-9fd1-3ba9595f1a33),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91fe5820-9c')
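The two ovsdbapp transactions above (AddBridgeCommand, then AddPortCommand plus DbSetCommand on the Interface row) are what os-vif issues over the OVSDB socket; the same effect can be reproduced by hand with ovs-vsctl, which is useful when checking why a port never showed up on br-int. A sketch of the equivalent CLI calls driven from Python (values taken from the log; only run something like this against a throwaway port):

    import subprocess

    port = "tap91fe5820-9c"
    iface_id = "91fe5820-9c04-4bb0-94bb-b6c3068e81e1"
    mac = "fa:16:3e:dd:fb:5f"
    vm_uuid = "b256c629-ce31-4c1a-a7a6-ed66c07e691a"

    cmds = [
        # AddBridgeCommand(may_exist=True, datapath_type=system)
        ["ovs-vsctl", "--may-exist", "add-br", "br-int",
         "--", "set", "Bridge", "br-int", "datapath_type=system"],
        # AddPortCommand(may_exist=True) + DbSetCommand on the Interface row
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", port,
         "--", "set", "Interface", port,
         f"external_ids:iface-id={iface_id}",
         "external_ids:iface-status=active",
         f"external_ids:attached-mac={mac}",
         f"external_ids:vm-uuid={vm_uuid}"],
    ]

    for cmd in cmds:
        subprocess.run(cmd, check=True)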
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.281 189613 DEBUG nova.virt.libvirt.driver [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.282 189613 DEBUG nova.virt.libvirt.driver [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.282 189613 DEBUG nova.virt.libvirt.driver [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] No VIF found with MAC fa:16:3e:dd:fb:5f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 24 22:32:16 compute-0 nova_compute[189608]: 2025-11-24 22:32:16.283 189613 INFO nova.virt.libvirt.driver [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Using config drive
Nov 24 22:32:16 compute-0 podman[254302]: 2025-11-24 22:32:16.331372375 +0000 UTC m=+0.073809645 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, org.label-schema.build-date=20251118, tcib_managed=true, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 24 22:32:16 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:16.378 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d2f80616-70e9-484c-836d-1edab81fe5d9, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:32:17 compute-0 nova_compute[189608]: 2025-11-24 22:32:17.298 189613 INFO nova.virt.libvirt.driver [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Creating config drive at /var/lib/nova/instances/b256c629-ce31-4c1a-a7a6-ed66c07e691a/disk.config
Nov 24 22:32:17 compute-0 nova_compute[189608]: 2025-11-24 22:32:17.311 189613 DEBUG oslo_concurrency.processutils [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b256c629-ce31-4c1a-a7a6-ed66c07e691a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0ye2yiqe execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:32:17 compute-0 nova_compute[189608]: 2025-11-24 22:32:17.466 189613 DEBUG oslo_concurrency.processutils [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b256c629-ce31-4c1a-a7a6-ed66c07e691a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0ye2yiqe" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
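The config drive is an ISO9660 image labelled config-2, built from a temporary directory holding the metadata and user_data files; the exact mkisofs invocation is visible in the two processutils lines above. A sketch that reproduces it with subprocess (the output path and staging directory are placeholders, the version suffix is dropped from the publisher string, and mkisofs/genisoimage must be installed):

    import subprocess

    def build_config_drive(output_path, staging_dir):
        """Pack staging_dir into a config-2 labelled ISO, as nova does."""
        cmd = [
            "/usr/bin/mkisofs",
            "-o", output_path,
            "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
            "-publisher", "OpenStack Compute",
            "-quiet", "-J", "-r",
            "-V", "config-2",          # the volume label cloud-init looks for
            staging_dir,
        ]
        subprocess.run(cmd, check=True)

    # build_config_drive("/var/lib/nova/instances/<uuid>/disk.config",
    #                    "/tmp/tmpXXXXXXXX")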
Nov 24 22:32:17 compute-0 kernel: tap91fe5820-9c: entered promiscuous mode
Nov 24 22:32:17 compute-0 NetworkManager[56413]: <info>  [1764023537.5417] manager: (tap91fe5820-9c): new Tun device (/org/freedesktop/NetworkManager/Devices/71)
Nov 24 22:32:17 compute-0 ovn_controller[97889]: 2025-11-24T22:32:17Z|00165|binding|INFO|Claiming lport 91fe5820-9c04-4bb0-94bb-b6c3068e81e1 for this chassis.
Nov 24 22:32:17 compute-0 nova_compute[189608]: 2025-11-24 22:32:17.545 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:17 compute-0 ovn_controller[97889]: 2025-11-24T22:32:17Z|00166|binding|INFO|91fe5820-9c04-4bb0-94bb-b6c3068e81e1: Claiming fa:16:3e:dd:fb:5f 10.100.0.7
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:17.555 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:fb:5f 10.100.0.7'], port_security=['fa:16:3e:dd:fb:5f 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'b256c629-ce31-4c1a-a7a6-ed66c07e691a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4153540b-1891-459a-9fd1-3ba9595f1a33', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b288cf23cdd049f48dfaafd888b33ea5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '521c979b-bc8e-4599-9106-34a19dacb3c4 7e4ca47e-ffcf-44c4-8bda-8d7d73d6409e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1dd1b5c5-bbef-41d9-9591-78d276d66648, chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], logical_port=91fe5820-9c04-4bb0-94bb-b6c3068e81e1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:17.556 106776 INFO neutron.agent.ovn.metadata.agent [-] Port 91fe5820-9c04-4bb0-94bb-b6c3068e81e1 in datapath 4153540b-1891-459a-9fd1-3ba9595f1a33 bound to our chassis
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:17.561 106776 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4153540b-1891-459a-9fd1-3ba9595f1a33
Nov 24 22:32:17 compute-0 ovn_controller[97889]: 2025-11-24T22:32:17Z|00167|binding|INFO|Setting lport 91fe5820-9c04-4bb0-94bb-b6c3068e81e1 ovn-installed in OVS
Nov 24 22:32:17 compute-0 ovn_controller[97889]: 2025-11-24T22:32:17Z|00168|binding|INFO|Setting lport 91fe5820-9c04-4bb0-94bb-b6c3068e81e1 up in Southbound
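Once ovn-controller claims the lport it sets ovn-installed on the OVS interface and flips the port up in the Southbound DB, which is what the four binding|INFO lines report. When a port appears stuck in the plugged-but-not-active state, the same two facts can be checked by hand; a sketch of those checks with the standard CLIs (assumes ovs-vsctl and ovn-sbctl are available on the chassis, run with sufficient privileges, and can reach the Southbound DB):

    import subprocess

    port = "tap91fe5820-9c"
    lport = "91fe5820-9c04-4bb0-94bb-b6c3068e81e1"

    # Is ovn-installed present in the Interface's external_ids?
    subprocess.run(["ovs-vsctl", "get", "Interface", port, "external_ids"],
                   check=True)

    # Is the Port_Binding row bound to this chassis and marked up?
    subprocess.run(["ovn-sbctl", "find", "Port_Binding",
                    f"logical_port={lport}"], check=True)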
Nov 24 22:32:17 compute-0 nova_compute[189608]: 2025-11-24 22:32:17.568 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:17 compute-0 nova_compute[189608]: 2025-11-24 22:32:17.576 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:17.576 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[aa154cbf-1a49-4c9e-94e3-06325eb9486b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:17.578 106776 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4153540b-11 in ovnmeta-4153540b-1891-459a-9fd1-3ba9595f1a33 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
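Provisioning the metadata datapath means creating a network namespace named after the Neutron network and a veth pair with one end (tap4153540b-11 here) moved inside it; the agent does this through pyroute2 behind privsep, but the same wiring can be sketched with the ip(8) CLI driven from Python (illustrative only; the names come from the log and the OVS side of the plumbing is omitted):

    import subprocess

    netns = "ovnmeta-4153540b-1891-459a-9fd1-3ba9595f1a33"
    outer = "tap4153540b-10"   # stays in the host namespace, plugged into br-int
    inner = "tap4153540b-11"   # serves the metadata proxy inside the namespace

    for cmd in (
        ["ip", "netns", "add", netns],
        ["ip", "link", "add", outer, "type", "veth", "peer", "name", inner],
        ["ip", "link", "set", inner, "netns", netns],
        ["ip", "-n", netns, "link", "set", inner, "up"],
        ["ip", "link", "set", outer, "up"],
    ):
        subprocess.run(cmd, check=True)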
Nov 24 22:32:17 compute-0 systemd-machined[155884]: New machine qemu-15-instance-0000000e.
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:17.597 240020 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4153540b-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:17.597 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[30c20114-5d40-4517-97d3-6099b67f6e6b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:17.598 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[81f0c1c9-fd0c-4845-8a9c-180f90e872e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:17.609 106891 DEBUG oslo.privsep.daemon [-] privsep: reply[3d76eb8b-eb1a-4416-a3ed-4b889143b0f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:32:17 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-0000000e.
Nov 24 22:32:17 compute-0 systemd-udevd[254346]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 22:32:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:17.633 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 24 22:32:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:17.633 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 24 22:32:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:17.634 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:32:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:17.634 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f4c57fbbfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:32:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:17.635 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:32:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:17.635 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:32:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:17.635 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:32:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:17.635 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:32:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:17.635 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:32:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:17.635 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:32:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:17.635 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:32:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:17.636 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:32:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:17.636 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:32:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:17.636 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:32:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:17.636 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:32:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:17.636 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:32:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:17.636 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:32:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:17.636 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:32:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:17.636 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:32:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:17.636 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:32:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:17.636 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:32:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:17.637 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:17.635 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[f8201f86-e5da-4b16-9eae-249248b13903]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:32:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:17.637 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:32:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:17.637 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:32:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:17.637 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:32:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:17.637 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:32:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:17.637 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:32:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:17.637 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:32:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:17.637 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
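The warning at 22:32:17.633 is simple arithmetic: the [pollsters] source registers more pollsters than the single worker thread configured for it, so the polling cycle serialises. The shape of that dispatch is a plain ThreadPoolExecutor; a toy sketch of the effect (hypothetical pollster callables, not ceilometer's manager):

    import time
    from concurrent.futures import ThreadPoolExecutor

    def make_pollster(name, duration=0.5):
        def poll():
            time.sleep(duration)        # pretend to query libvirt / the host
            return name
        return poll

    pollsters = [make_pollster(f"pollster-{i}") for i in range(8)]

    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=1) as pool:   # the "[1] threads" case
        results = list(pool.map(lambda p: p(), pollsters))
    print(f"{len(results)} pollsters in {time.monotonic() - start:.1f}s "
          "with 1 worker; raising max_workers shortens the cycle")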
Nov 24 22:32:17 compute-0 NetworkManager[56413]: <info>  [1764023537.6458] device (tap91fe5820-9c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 22:32:17 compute-0 NetworkManager[56413]: <info>  [1764023537.6470] device (tap91fe5820-9c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 24 22:32:17 compute-0 nova_compute[189608]: 2025-11-24 22:32:17.665 189613 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764023522.6617239, c6cadef7-2599-4c75-a37d-2d1e6d469a82 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:32:17 compute-0 nova_compute[189608]: 2025-11-24 22:32:17.665 189613 INFO nova.compute.manager [-] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] VM Stopped (Lifecycle Event)
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:17.667 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[571018b9-a0ba-41fb-969a-b8589b120b0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:32:17 compute-0 systemd-udevd[254350]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 22:32:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:17.679 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a3bee9ba-6618-44bd-a443-da9fff6862a9', 'name': 'te-0491564-asg-haouulsup5dm-5bjba57zh6x7-pvowny27behx', 'flavor': {'id': 'a49f1e6c-1051-4dea-812e-0063121444a0', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ea88776c-3c0b-4e74-99b4-08aadc81390f'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4a6957a775da42c9b535753d6b0279d6', 'user_id': 'fcf527fb124b42b9ab6a20cc0938b39f', 'hostId': '81545f88e9d372ef5a81fb4c58a4232a2d7e6782c7ccb5b924a3030b', 'status': 'active', 'metadata': {'metering.server_group': 'c6477657-e9b0-476c-83b3-9dc474e946c6'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 24 22:32:17 compute-0 NetworkManager[56413]: <info>  [1764023537.6808] manager: (tap4153540b-10): new Veth device (/org/freedesktop/NetworkManager/Devices/72)
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:17.676 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[3cb577f1-d529-4a13-be7d-0109601484df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:32:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:17.683 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f238e71a-660e-497c-8472-193245387bcf', 'name': 'tempest-ServerActionsTestJSON-server-1585588029', 'flavor': {'id': 'a49f1e6c-1051-4dea-812e-0063121444a0', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ec71d7d5-c197-4331-bf8d-e2de71a8419f'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000009', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '97e21ffeec1c4428ba3d70499fc3281f', 'user_id': '11288fa7771048b4a8faf1d6485ab059', 'hostId': '27625e982f38e3650ffe5ce8e3be255c7a5bc7b5228df6055671ee8e', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 24 22:32:17 compute-0 nova_compute[189608]: 2025-11-24 22:32:17.683 189613 DEBUG nova.compute.manager [None req-3627ab39-a13f-4993-ad94-08eef99c6aa8 - - - - - -] [instance: c6cadef7-2599-4c75-a37d-2d1e6d469a82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:32:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:17.686 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance b256c629-ce31-4c1a-a7a6-ed66c07e691a from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 24 22:32:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:17.687 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/b256c629-ce31-4c1a-a7a6-ed66c07e691a -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}64060fdce0de69c84bc4ec1fbe9dbdfbc4dd57ffec34862b3445d9841d3967a2" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
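The REQ line is keystoneauth1's debug dump of the GET that ceilometer issues to look up metadata for the new instance. The same call can be reproduced with python-novaclient on a keystone session; the auth URL, credentials and domain names below are placeholders, and only the server UUID and microversion come from the log:

    from keystoneauth1 import identity, session
    from novaclient import client as nova_client

    auth = identity.Password(
        auth_url="https://keystone-internal.openstack.svc:5000/v3",  # placeholder
        username="ceilometer", password="secret",                    # placeholders
        project_name="service",
        user_domain_name="Default", project_domain_name="Default")
    sess = session.Session(auth=auth)

    nova = nova_client.Client("2.1", session=sess)
    server = nova.servers.get("b256c629-ce31-4c1a-a7a6-ed66c07e691a")
    print(server.name, server.status, server.metadata)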
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:17.715 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[fe07a0aa-6746-497d-bbe5-fd6e0aabce7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:17.722 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[8a6f11e3-0cdf-4f76-a040-bbae6f059bfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:32:17 compute-0 NetworkManager[56413]: <info>  [1764023537.7426] device (tap4153540b-10): carrier: link connected
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:17.748 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[301ed47e-b977-46c1-b9c2-394d7d3c167b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:17.766 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[a445c483-79bb-43a1-ba4e-73a6d83c357b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4153540b-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:20:07:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538523, 'reachable_time': 22550, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254377, 'error': None, 'target': 'ovnmeta-4153540b-1891-459a-9fd1-3ba9595f1a33', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
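The large privsep reply above is a raw pyroute2 netlink message for tap4153540b-11 inside the ovnmeta namespace; the agent then reads individual IFLA_* attributes out of it. The same view can be obtained interactively with a few lines of pyroute2 (assumes the namespace from the log still exists and the script runs as root):

    from pyroute2 import NetNS

    ns_name = "ovnmeta-4153540b-1891-459a-9fd1-3ba9595f1a33"

    with NetNS(ns_name) as ns:
        for link in ns.get_links():
            print(link.get_attr("IFLA_IFNAME"),
                  link["state"],
                  link.get_attr("IFLA_ADDRESS"),
                  "mtu", link.get_attr("IFLA_MTU"))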
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:17.783 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[6e8d7aed-2b66-4fa7-a4e0-3ac10ee9a66b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe20:7f7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 538523, 'tstamp': 538523}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254378, 'error': None, 'target': 'ovnmeta-4153540b-1891-459a-9fd1-3ba9595f1a33', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:17.800 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[424ba075-ea33-4894-9707-b37bce4dca1c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4153540b-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:20:07:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538523, 'reachable_time': 22550, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 254379, 'error': None, 'target': 'ovnmeta-4153540b-1891-459a-9fd1-3ba9595f1a33', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
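[Annotation] The two RTM_NEWLINK replies above are the privsep daemon returning the full attribute set for tap4153540b-11 inside the ovnmeta-4153540b-... namespace (veth peer, MTU 1500, carrier up, link-local address assigned). A minimal sketch of reading the same attributes directly with pyroute2, assuming the namespace and interface names shown in the log:

    # Sketch: read link attributes inside the OVN metadata namespace with
    # pyroute2, mirroring what oslo.privsep returns to the agent above.
    # Namespace and interface names are taken from the log (assumptions).
    from pyroute2 import NetNS

    NS_NAME = 'ovnmeta-4153540b-1891-459a-9fd1-3ba9595f1a33'
    IFNAME = 'tap4153540b-11'

    ns = NetNS(NS_NAME)
    try:
        for link in ns.get_links():
            if link.get_attr('IFLA_IFNAME') != IFNAME:
                continue
            print('mtu      :', link.get_attr('IFLA_MTU'))
            print('mac      :', link.get_attr('IFLA_ADDRESS'))
            print('operstate:', link.get_attr('IFLA_OPERSTATE'))
            print('carrier  :', link.get_attr('IFLA_CARRIER'))
    finally:
        ns.close()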
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:17.829 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[5843bf99-0e69-4363-b32f-f9ed7fdb8677]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:17.893 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[b6be1047-7c0c-45b8-b36f-1aa9de48abb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:17.895 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4153540b-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:17.895 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:17.896 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4153540b-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:32:17 compute-0 kernel: tap4153540b-10: entered promiscuous mode
Nov 24 22:32:17 compute-0 nova_compute[189608]: 2025-11-24 22:32:17.898 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:17 compute-0 NetworkManager[56413]: <info>  [1764023537.8989] manager: (tap4153540b-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Nov 24 22:32:17 compute-0 nova_compute[189608]: 2025-11-24 22:32:17.903 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:17.904 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4153540b-10, col_values=(('external_ids', {'iface-id': 'ea50a485-7290-4ceb-bc3f-d004948a33d1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:32:17 compute-0 nova_compute[189608]: 2025-11-24 22:32:17.905 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:17 compute-0 ovn_controller[97889]: 2025-11-24T22:32:17Z|00169|binding|INFO|Releasing lport ea50a485-7290-4ceb-bc3f-d004948a33d1 from this chassis (sb_readonly=0)
Nov 24 22:32:17 compute-0 nova_compute[189608]: 2025-11-24 22:32:17.906 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:17.908 106776 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4153540b-1891-459a-9fd1-3ba9595f1a33.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4153540b-1891-459a-9fd1-3ba9595f1a33.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
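[Annotation] The ENOENT above is the agent's routine "is a metadata proxy already running for this network?" check: it tries to read the haproxy pid file and treats a missing file as "not running" rather than as an error. A small sketch of that read-or-None pattern, using the path from the log:

    # Sketch of the pid-file check logged above: return the pid as an int if
    # the file exists, None when it does not (ENOENT is the expected case).
    import errno

    PID_PATH = ('/var/lib/neutron/external/pids/'
                '4153540b-1891-459a-9fd1-3ba9595f1a33.pid.haproxy')

    def get_pid_or_none(path):
        try:
            with open(path) as f:
                return int(f.read().strip())
        except OSError as exc:
            if exc.errno == errno.ENOENT:
                return None
            raise

    print(get_pid_or_none(PID_PATH))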
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:17.909 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[deba685b-80b5-4e57-a14a-75dcdf4081e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:17.910 106776 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]: global
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]:     log         /dev/log local0 debug
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]:     log-tag     haproxy-metadata-proxy-4153540b-1891-459a-9fd1-3ba9595f1a33
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]:     user        root
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]:     group       root
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]:     maxconn     1024
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]:     pidfile     /var/lib/neutron/external/pids/4153540b-1891-459a-9fd1-3ba9595f1a33.pid.haproxy
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]:     daemon
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]: 
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]: defaults
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]:     log global
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]:     mode http
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]:     option httplog
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]:     option dontlognull
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]:     option http-server-close
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]:     option forwardfor
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]:     retries                 3
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]:     timeout http-request    30s
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]:     timeout connect         30s
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]:     timeout client          32s
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]:     timeout server          32s
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]:     timeout http-keep-alive 30s
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]: 
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]: 
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]: listen listener
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]:     bind 169.254.169.254:80
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]:     server metadata /var/lib/neutron/metadata_proxy
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]:     http-request add-header X-OVN-Network-ID 4153540b-1891-459a-9fd1-3ba9595f1a33
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 24 22:32:17 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:17.913 106776 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4153540b-1891-459a-9fd1-3ba9595f1a33', 'env', 'PROCESS_TAG=haproxy-4153540b-1891-459a-9fd1-3ba9595f1a33', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4153540b-1891-459a-9fd1-3ba9595f1a33.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
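[Annotation] The rendered configuration is written to /var/lib/neutron/ovn-metadata-proxy/<network>.conf and haproxy is started inside the network namespace through rootwrap, as the command line above shows. A hedged sketch of reproducing that launch (with a config syntax check first) via subprocess; the paths and namespace name are the ones from the log, and running it requires root plus the haproxy and iproute2 binaries:

    # Sketch, not the agent's own code: syntax-check the generated haproxy
    # config, then start haproxy inside the ovnmeta namespace, mirroring the
    # rootwrap command logged above.
    import subprocess

    NS = 'ovnmeta-4153540b-1891-459a-9fd1-3ba9595f1a33'
    CFG = ('/var/lib/neutron/ovn-metadata-proxy/'
           '4153540b-1891-459a-9fd1-3ba9595f1a33.conf')

    # haproxy -c only parses the config and exits, so failures surface early.
    subprocess.run(['haproxy', '-c', '-f', CFG], check=True)

    # Launch haproxy (daemonized by the config's "daemon" directive) in the
    # namespace, as the agent does via "ip netns exec".
    subprocess.run(['ip', 'netns', 'exec', NS, 'haproxy', '-f', CFG], check=True)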
Nov 24 22:32:17 compute-0 nova_compute[189608]: 2025-11-24 22:32:17.922 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:17 compute-0 nova_compute[189608]: 2025-11-24 22:32:17.991 189613 DEBUG nova.compute.manager [req-ac5d2618-5aed-4558-bba8-2dc281b3966b req-304235ac-2a5a-4c61-965b-7faa727d676e c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Received event network-vif-plugged-91fe5820-9c04-4bb0-94bb-b6c3068e81e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:32:17 compute-0 nova_compute[189608]: 2025-11-24 22:32:17.993 189613 DEBUG oslo_concurrency.lockutils [req-ac5d2618-5aed-4558-bba8-2dc281b3966b req-304235ac-2a5a-4c61-965b-7faa727d676e c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "b256c629-ce31-4c1a-a7a6-ed66c07e691a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:32:17 compute-0 nova_compute[189608]: 2025-11-24 22:32:17.993 189613 DEBUG oslo_concurrency.lockutils [req-ac5d2618-5aed-4558-bba8-2dc281b3966b req-304235ac-2a5a-4c61-965b-7faa727d676e c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "b256c629-ce31-4c1a-a7a6-ed66c07e691a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:32:17 compute-0 nova_compute[189608]: 2025-11-24 22:32:17.994 189613 DEBUG oslo_concurrency.lockutils [req-ac5d2618-5aed-4558-bba8-2dc281b3966b req-304235ac-2a5a-4c61-965b-7faa727d676e c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "b256c629-ce31-4c1a-a7a6-ed66c07e691a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:32:17 compute-0 nova_compute[189608]: 2025-11-24 22:32:17.994 189613 DEBUG nova.compute.manager [req-ac5d2618-5aed-4558-bba8-2dc281b3966b req-304235ac-2a5a-4c61-965b-7faa727d676e c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Processing event network-vif-plugged-91fe5820-9c04-4bb0-94bb-b6c3068e81e1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 24 22:32:18 compute-0 nova_compute[189608]: 2025-11-24 22:32:18.060 189613 DEBUG nova.compute.manager [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 24 22:32:18 compute-0 nova_compute[189608]: 2025-11-24 22:32:18.061 189613 DEBUG nova.virt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Emitting event <LifecycleEvent: 1764023538.059908, b256c629-ce31-4c1a-a7a6-ed66c07e691a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:32:18 compute-0 nova_compute[189608]: 2025-11-24 22:32:18.061 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] VM Started (Lifecycle Event)
Nov 24 22:32:18 compute-0 nova_compute[189608]: 2025-11-24 22:32:18.067 189613 DEBUG nova.virt.libvirt.driver [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 24 22:32:18 compute-0 nova_compute[189608]: 2025-11-24 22:32:18.072 189613 INFO nova.virt.libvirt.driver [-] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Instance spawned successfully.
Nov 24 22:32:18 compute-0 nova_compute[189608]: 2025-11-24 22:32:18.072 189613 DEBUG nova.virt.libvirt.driver [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 24 22:32:18 compute-0 nova_compute[189608]: 2025-11-24 22:32:18.077 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:32:18 compute-0 nova_compute[189608]: 2025-11-24 22:32:18.085 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
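[Annotation] The numeric states in the message above are nova's power_state values (0 is NOSTATE, 1 is RUNNING, assuming the usual nova.compute.power_state constants); the periodic sync compares the value cached in the database with what the hypervisor reports and, while a task such as spawning is still pending, only logs and skips, as the "Skip" message further down shows. A small sketch of that comparison under those assumed constants:

    # Sketch of the power-state sync decision behind the message above.
    # Constant values assumed from nova.compute.power_state.
    NOSTATE, RUNNING, PAUSED, SHUTDOWN, CRASHED, SUSPENDED = 0, 1, 3, 4, 6, 7

    def sync_power_state(db_power_state, vm_power_state, task_state):
        # With a pending task (e.g. 'spawning') the sync does not touch the DB.
        if task_state is not None:
            return 'skip: pending task %s' % task_state
        if db_power_state != vm_power_state:
            return 'update DB %s -> %s' % (db_power_state, vm_power_state)
        return 'in sync'

    print(sync_power_state(NOSTATE, RUNNING, 'spawning'))  # skip: pending task spawning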
Nov 24 22:32:18 compute-0 nova_compute[189608]: 2025-11-24 22:32:18.101 189613 DEBUG nova.virt.libvirt.driver [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:32:18 compute-0 nova_compute[189608]: 2025-11-24 22:32:18.101 189613 DEBUG nova.virt.libvirt.driver [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:32:18 compute-0 nova_compute[189608]: 2025-11-24 22:32:18.102 189613 DEBUG nova.virt.libvirt.driver [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:32:18 compute-0 nova_compute[189608]: 2025-11-24 22:32:18.103 189613 DEBUG nova.virt.libvirt.driver [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:32:18 compute-0 nova_compute[189608]: 2025-11-24 22:32:18.103 189613 DEBUG nova.virt.libvirt.driver [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:32:18 compute-0 nova_compute[189608]: 2025-11-24 22:32:18.104 189613 DEBUG nova.virt.libvirt.driver [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:32:18 compute-0 nova_compute[189608]: 2025-11-24 22:32:18.126 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 22:32:18 compute-0 nova_compute[189608]: 2025-11-24 22:32:18.127 189613 DEBUG nova.virt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Emitting event <LifecycleEvent: 1764023538.0600212, b256c629-ce31-4c1a-a7a6-ed66c07e691a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:32:18 compute-0 nova_compute[189608]: 2025-11-24 22:32:18.127 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] VM Paused (Lifecycle Event)
Nov 24 22:32:18 compute-0 nova_compute[189608]: 2025-11-24 22:32:18.153 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:32:18 compute-0 nova_compute[189608]: 2025-11-24 22:32:18.158 189613 DEBUG nova.virt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Emitting event <LifecycleEvent: 1764023538.066382, b256c629-ce31-4c1a-a7a6-ed66c07e691a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:32:18 compute-0 nova_compute[189608]: 2025-11-24 22:32:18.158 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] VM Resumed (Lifecycle Event)
Nov 24 22:32:18 compute-0 nova_compute[189608]: 2025-11-24 22:32:18.165 189613 INFO nova.compute.manager [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Took 6.54 seconds to spawn the instance on the hypervisor.
Nov 24 22:32:18 compute-0 nova_compute[189608]: 2025-11-24 22:32:18.165 189613 DEBUG nova.compute.manager [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:32:18 compute-0 nova_compute[189608]: 2025-11-24 22:32:18.178 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:32:18 compute-0 nova_compute[189608]: 2025-11-24 22:32:18.183 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 22:32:18 compute-0 nova_compute[189608]: 2025-11-24 22:32:18.212 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 22:32:18 compute-0 nova_compute[189608]: 2025-11-24 22:32:18.245 189613 INFO nova.compute.manager [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Took 7.06 seconds to build instance.
Nov 24 22:32:18 compute-0 nova_compute[189608]: 2025-11-24 22:32:18.260 189613 DEBUG oslo_concurrency.lockutils [None req-28cb0a87-2277-445d-aee3-71da50c2812b 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Lock "b256c629-ce31-4c1a-a7a6-ed66c07e691a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.176s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:32:18 compute-0 podman[254418]: 2025-11-24 22:32:18.364628642 +0000 UTC m=+0.063862135 container create 18c0f99b9ba71ab04e170345759b0c9bbb4ac91c02710d130cb422595be20344 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4153540b-1891-459a-9fd1-3ba9595f1a33, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 24 22:32:18 compute-0 systemd[1]: Started libpod-conmon-18c0f99b9ba71ab04e170345759b0c9bbb4ac91c02710d130cb422595be20344.scope.
Nov 24 22:32:18 compute-0 systemd[1]: Started libcrun container.
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.427 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1782 Content-Type: application/json Date: Mon, 24 Nov 2025 22:32:17 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-a1e53e54-48a4-4335-8da1-822020a66fe0 x-openstack-request-id: req-a1e53e54-48a4-4335-8da1-822020a66fe0 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.427 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "b256c629-ce31-4c1a-a7a6-ed66c07e691a", "name": "tempest-TestServerBasicOps-server-1941519261", "status": "BUILD", "tenant_id": "b288cf23cdd049f48dfaafd888b33ea5", "user_id": "1ff3ad5c90cd47639553ad5015a81aca", "metadata": {"meta1": "data1", "meta2": "data2", "metaN": "dataN"}, "hostId": "d97a97044cb8c62d7756f0392cec4b65b1422bfcb64299fdba38cdec", "image": {"id": "ec71d7d5-c197-4331-bf8d-e2de71a8419f", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/ec71d7d5-c197-4331-bf8d-e2de71a8419f"}]}, "flavor": {"id": "a49f1e6c-1051-4dea-812e-0063121444a0", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/a49f1e6c-1051-4dea-812e-0063121444a0"}]}, "created": "2025-11-24T22:32:10Z", "updated": "2025-11-24T22:32:11Z", "addresses": {}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/b256c629-ce31-4c1a-a7a6-ed66c07e691a"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/b256c629-ce31-4c1a-a7a6-ed66c07e691a"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-TestServerBasicOps-633402408", "OS-SRV-USG:launched_at": null, "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-secgroup-smoke-1015568497"}, {"name": "tempest-securitygroup--1757933774"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000e", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": "spawning", "OS-EXT-STS:vm_state": "building", "OS-EXT-STS:power_state": 0, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.427 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/b256c629-ce31-4c1a-a7a6-ed66c07e691a used request id req-a1e53e54-48a4-4335-8da1-822020a66fe0 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.429 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b256c629-ce31-4c1a-a7a6-ed66c07e691a', 'name': 'tempest-TestServerBasicOps-server-1941519261', 'flavor': {'id': 'a49f1e6c-1051-4dea-812e-0063121444a0', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ec71d7d5-c197-4331-bf8d-e2de71a8419f'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'b288cf23cdd049f48dfaafd888b33ea5', 'user_id': '1ff3ad5c90cd47639553ad5015a81aca', 'hostId': 'd97a97044cb8c62d7756f0392cec4b65b1422bfcb64299fdba38cdec', 'status': 'active', 'metadata': {'meta1': 'data1', 'meta2': 'data2', 'metaN': 'dataN'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
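[Annotation] The "instance data" dict above is ceilometer's discovery step condensing the nova server record returned in the RESP BODY at 22:32:18.427. Roughly the same fields can be pulled from that JSON with the standard library alone; a sketch, using a trimmed copy of the logged response body:

    # Sketch: extract the fields ceilometer logs as "instance data" from the
    # nova server JSON shown in the RESP BODY line above (body trimmed to the
    # fields used here).
    import json

    resp_body = '''{"server": {"id": "b256c629-ce31-4c1a-a7a6-ed66c07e691a",
     "name": "tempest-TestServerBasicOps-server-1941519261", "status": "BUILD",
     "tenant_id": "b288cf23cdd049f48dfaafd888b33ea5",
     "user_id": "1ff3ad5c90cd47639553ad5015a81aca",
     "metadata": {"meta1": "data1", "meta2": "data2", "metaN": "dataN"},
     "OS-EXT-SRV-ATTR:instance_name": "instance-0000000e",
     "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com"}}'''

    server = json.loads(resp_body)['server']
    instance = {
        'id': server['id'],
        'name': server['name'],
        'tenant_id': server['tenant_id'],
        'user_id': server['user_id'],
        'metadata': server['metadata'],
        'instance_name': server.get('OS-EXT-SRV-ATTR:instance_name'),
        'host': server.get('OS-EXT-SRV-ATTR:host'),
    }
    print(instance['name'], instance['instance_name'])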
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.429 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.429 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.429 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.429 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:32:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bbd655e4dd91066498d36b5b5c8a4af74f1c56aba8acd268028f1fca84f58fc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.431 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-24T22:32:18.429740) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:32:18 compute-0 podman[254418]: 2025-11-24 22:32:18.336153458 +0000 UTC m=+0.035386971 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.434 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.442 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.446 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for b256c629-ce31-4c1a-a7a6-ed66c07e691a / tap91fe5820-9c inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.446 14 DEBUG ceilometer.compute.pollsters [-] b256c629-ce31-4c1a-a7a6-ed66c07e691a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.447 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.447 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f4c580340b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.447 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.447 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.447 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.447 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.447 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.448 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.448 14 DEBUG ceilometer.compute.pollsters [-] b256c629-ce31-4c1a-a7a6-ed66c07e691a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.448 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.448 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f4c57fba4e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.448 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.449 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.449 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.449 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:32:18 compute-0 podman[254418]: 2025-11-24 22:32:18.450575003 +0000 UTC m=+0.149808516 container init 18c0f99b9ba71ab04e170345759b0c9bbb4ac91c02710d130cb422595be20344 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4153540b-1891-459a-9fd1-3ba9595f1a33, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.451 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-24T22:32:18.447601) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.451 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-24T22:32:18.449247) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:32:18 compute-0 podman[254418]: 2025-11-24 22:32:18.459206931 +0000 UTC m=+0.158440424 container start 18c0f99b9ba71ab04e170345759b0c9bbb4ac91c02710d130cb422595be20344 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4153540b-1891-459a-9fd1-3ba9595f1a33, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.463 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.464 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.477 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.478 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 neutron-haproxy-ovnmeta-4153540b-1891-459a-9fd1-3ba9595f1a33[254433]: [NOTICE]   (254437) : New worker (254439) forked
Nov 24 22:32:18 compute-0 neutron-haproxy-ovnmeta-4153540b-1891-459a-9fd1-3ba9595f1a33[254433]: [NOTICE]   (254437) : Loading success.
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.500 14 DEBUG ceilometer.compute.pollsters [-] b256c629-ce31-4c1a-a7a6-ed66c07e691a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.501 14 DEBUG ceilometer.compute.pollsters [-] b256c629-ce31-4c1a-a7a6-ed66c07e691a/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.501 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.502 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f4c57fbb080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.502 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.502 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.502 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.502 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.504 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-24T22:32:18.502606) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.537 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.read.bytes volume: 28965888 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.537 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.572 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/disk.device.read.bytes volume: 32036864 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.573 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.612 14 DEBUG ceilometer.compute.pollsters [-] b256c629-ce31-4c1a-a7a6-ed66c07e691a/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.613 14 DEBUG ceilometer.compute.pollsters [-] b256c629-ce31-4c1a-a7a6-ed66c07e691a/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.614 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.614 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f4c57fbb1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.614 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.614 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.614 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.614 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.614 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.read.latency volume: 1026400534 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.615 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.read.latency volume: 91052482 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.615 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/disk.device.read.latency volume: 1459742376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.616 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/disk.device.read.latency volume: 95113762 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.617 14 DEBUG ceilometer.compute.pollsters [-] b256c629-ce31-4c1a-a7a6-ed66c07e691a/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.617 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-24T22:32:18.614717) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.617 14 DEBUG ceilometer.compute.pollsters [-] b256c629-ce31-4c1a-a7a6-ed66c07e691a/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 nova_compute[189608]: 2025-11-24 22:32:18.617 189613 DEBUG nova.network.neutron [req-65ff4694-0d6c-4c3d-a11b-672be58027e8 req-89af2c70-d1da-44b1-a66b-69c11e942767 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Updated VIF entry in instance network info cache for port 91fe5820-9c04-4bb0-94bb-b6c3068e81e1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.617 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.618 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f4c57fb9730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.618 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.618 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.618 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.618 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:32:18 compute-0 nova_compute[189608]: 2025-11-24 22:32:18.618 189613 DEBUG nova.network.neutron [req-65ff4694-0d6c-4c3d-a11b-672be58027e8 req-89af2c70-d1da-44b1-a66b-69c11e942767 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Updating instance_info_cache with network_info: [{"id": "91fe5820-9c04-4bb0-94bb-b6c3068e81e1", "address": "fa:16:3e:dd:fb:5f", "network": {"id": "4153540b-1891-459a-9fd1-3ba9595f1a33", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1140659131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b288cf23cdd049f48dfaafd888b33ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91fe5820-9c", "ovs_interfaceid": "91fe5820-9c04-4bb0-94bb-b6c3068e81e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
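[Annotation] The network_info cache entry above is plain JSON, so the device name, MAC and fixed IP that nova will report for port 91fe5820-9c04-4bb0-94bb-b6c3068e81e1 can be read straight out of it. A minimal sketch using a trimmed copy of that entry:

    # Sketch: pull the device name, MAC and fixed IPs out of the network_info
    # entry logged above (trimmed to the fields used here).
    import json

    network_info = json.loads('''[{"id": "91fe5820-9c04-4bb0-94bb-b6c3068e81e1",
     "address": "fa:16:3e:dd:fb:5f",
     "network": {"id": "4153540b-1891-459a-9fd1-3ba9595f1a33",
                 "subnets": [{"cidr": "10.100.0.0/28",
                              "ips": [{"address": "10.100.0.7", "version": 4}]}]},
     "devname": "tap91fe5820-9c"}]''')

    for vif in network_info:
        ips = [ip['address']
               for subnet in vif['network']['subnets']
               for ip in subnet['ips']]
        print(vif['devname'], vif['address'], ips)
        # -> tap91fe5820-9c fa:16:3e:dd:fb:5f ['10.100.0.7']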
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.625 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-24T22:32:18.618664) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:32:18 compute-0 nova_compute[189608]: 2025-11-24 22:32:18.637 189613 DEBUG oslo_concurrency.lockutils [req-65ff4694-0d6c-4c3d-a11b-672be58027e8 req-89af2c70-d1da-44b1-a66b-69c11e942767 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Releasing lock "refresh_cache-b256c629-ce31-4c1a-a7a6-ed66c07e691a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.650 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/cpu volume: 112280000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.669 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/cpu volume: 34710000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.695 14 DEBUG ceilometer.compute.pollsters [-] b256c629-ce31-4c1a-a7a6-ed66c07e691a/cpu volume: 550000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.696 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
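[Annotation] The cpu samples above are cumulative guest CPU time in nanoseconds (550000000 ns, about 0.55 s, for the instance that has just booted), which ceilometer reads from libvirt. A hedged sketch of fetching the same counter with libvirt-python, assuming read access to the local qemu:///system socket and the instance name from the log:

    # Sketch: read cumulative CPU time (nanoseconds) for a domain via
    # libvirt-python, the counter reported as "cpu volume" above.
    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByName('instance-0000000e')

    # dom.info() returns [state, maxMem, memory, nrVirtCpu, cpuTime(ns)].
    state, maxmem, mem, vcpus, cpu_time_ns = dom.info()
    print('cpu_time_ns:', cpu_time_ns)
    print('cpu_seconds:', cpu_time_ns / 1e9)
    conn.close()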
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.696 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f4c57fbb200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.696 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.696 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.696 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.697 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.697 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.read.requests volume: 1041 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.697 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.698 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/disk.device.read.requests volume: 1212 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.699 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.698 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-24T22:32:18.696939) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.699 14 DEBUG ceilometer.compute.pollsters [-] b256c629-ce31-4c1a-a7a6-ed66c07e691a/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.700 14 DEBUG ceilometer.compute.pollsters [-] b256c629-ce31-4c1a-a7a6-ed66c07e691a/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.700 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
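Every meter in this section goes through the same cycle that the disk.device.read.requests lines above show: run discovery for local instances, check whether the pollster needs coordination (none of these do, so no hashring is involved), update the pollster heartbeat, emit one sample per instance or device, and log completion. A hypothetical sketch of that control flow (the names below are illustrative, not ceilometer's internals):

    def run_pollster(name, discover, needs_coordination, get_samples, heartbeat):
        resources = discover()                     # "Executing discovery process ..."
        if needs_coordination(name):               # "Checking if we need coordination ..."
            return []                              # would defer to the hashring here
        heartbeat(name)                            # "Pollster heartbeat update: <name>"
        samples = []
        for resource in resources:
            samples.extend(get_samples(resource))  # "<uuid>/<meter> volume: ..."
        return samples                             # "Finished polling pollster <name>"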
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.700 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f4c57fbb260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.701 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.701 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.701 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.701 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.701 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.702 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.702 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/disk.device.usage volume: 30146560 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.702 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-24T22:32:18.701563) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.703 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.703 14 DEBUG ceilometer.compute.pollsters [-] b256c629-ce31-4c1a-a7a6-ed66c07e691a/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.704 14 DEBUG ceilometer.compute.pollsters [-] b256c629-ce31-4c1a-a7a6-ed66c07e691a/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.705 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
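disk.device.usage, like the other disk.device.* meters in this section, is emitted once per attached device, which is why each instance shows two volumes above (the root disk plus what is presumably the config drive). A short sketch that rolls the per-device values logged above up to a per-instance total:

    from collections import defaultdict

    # (instance_id, per-device volume in bytes) pairs as logged above
    samples = [
        ("a3bee9ba-6618-44bd-a443-da9fff6862a9", 29884416),
        ("a3bee9ba-6618-44bd-a443-da9fff6862a9", 509952),
        ("f238e71a-660e-497c-8472-193245387bcf", 30146560),
        ("f238e71a-660e-497c-8472-193245387bcf", 509952),
        ("b256c629-ce31-4c1a-a7a6-ed66c07e691a", 196624),
        ("b256c629-ce31-4c1a-a7a6-ed66c07e691a", 509952),
    ]

    per_instance = defaultdict(int)
    for instance_id, volume in samples:
        per_instance[instance_id] += volume

    for instance_id, total in per_instance.items():
        print(instance_id, total, "bytes")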
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.705 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f4c57fbb2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.705 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.705 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.706 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.706 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.706 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.write.bytes volume: 72900608 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.706 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.707 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/disk.device.write.bytes volume: 311296 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.708 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.708 14 DEBUG ceilometer.compute.pollsters [-] b256c629-ce31-4c1a-a7a6-ed66c07e691a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.708 14 DEBUG ceilometer.compute.pollsters [-] b256c629-ce31-4c1a-a7a6-ed66c07e691a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.709 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f4c57fbb320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.710 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.710 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.710 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.710 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.711 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.write.latency volume: 4091704388 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.711 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-24T22:32:18.706245) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.711 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.712 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/disk.device.write.latency volume: 78936569 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.712 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.712 14 DEBUG ceilometer.compute.pollsters [-] b256c629-ce31-4c1a-a7a6-ed66c07e691a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.713 14 DEBUG ceilometer.compute.pollsters [-] b256c629-ce31-4c1a-a7a6-ed66c07e691a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.712 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-24T22:32:18.710808) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.714 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.714 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f4c57fbb380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.714 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.714 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.714 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.714 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.715 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.write.requests volume: 298 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.715 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.715 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/disk.device.write.requests volume: 35 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.716 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.716 14 DEBUG ceilometer.compute.pollsters [-] b256c629-ce31-4c1a-a7a6-ed66c07e691a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.716 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-24T22:32:18.714918) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.717 14 DEBUG ceilometer.compute.pollsters [-] b256c629-ce31-4c1a-a7a6-ed66c07e691a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.718 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.718 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f4c58034380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.718 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.718 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.718 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.718 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.719 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.719 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-24T22:32:18.718909) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.720 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.720 14 DEBUG ceilometer.compute.pollsters [-] b256c629-ce31-4c1a-a7a6-ed66c07e691a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.721 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
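All three instances report power.state volume 1, i.e. a running guest. The numeric values follow the Nova power-state convention (nova.compute.power_state); treating that mapping as an assumption, a tiny lookup for reading these samples:

    # Assumed mapping, following nova.compute.power_state constants.
    POWER_STATES = {
        0: "NOSTATE",
        1: "RUNNING",
        3: "PAUSED",
        4: "SHUTDOWN",
        6: "CRASHED",
        7: "SUSPENDED",
    }

    print(POWER_STATES.get(1, "UNKNOWN"))  # volume 1 above -> RUNNING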
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.721 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f4c57fbb740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.721 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.721 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.721 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.722 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.722 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.incoming.bytes.delta volume: 1262 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.722 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/network.incoming.bytes.delta volume: 1341 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.723 14 DEBUG ceilometer.compute.pollsters [-] b256c629-ce31-4c1a-a7a6-ed66c07e691a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.723 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.724 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-24T22:32:18.722135) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
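network.incoming.bytes.delta is the change in the cumulative network.incoming.bytes counter since the previous poll (the cumulative values for the same VIFs, 1352/1431/90, are logged a few lines further down). A minimal sketch of that delta calculation; the in-memory cache keyed by VIF is illustrative:

    # Previous cumulative reading per VIF; illustrative in-memory cache.
    _last_reading = {}

    def bytes_delta(vif_id, cumulative):
        """Bytes received since the previous poll for this VIF."""
        prev = _last_reading.get(vif_id)
        _last_reading[vif_id] = cumulative
        if prev is None or cumulative < prev:   # first poll, or counter reset
            return 0
        return cumulative - prev

    print(bytes_delta("tap91fe5820-9c", 90))    # first poll  -> 0
    print(bytes_delta("tap91fe5820-9c", 250))   # second poll -> 160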
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.724 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f4c57fbb3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.724 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.724 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.725 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.725 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.726 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.726 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f4c57fbb9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.726 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.726 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.726 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.727 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-24T22:32:18.725163) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.727 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.727 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.727 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-TestServerBasicOps-server-1941519261>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestServerBasicOps-server-1941519261>]
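The ERROR above is not a transient failure: the libvirt inspector never provides the *.rate meters, so the manager raises PollsterPermanentError and stops asking this pollster about the affected server instead of retrying it every cycle. A hypothetical sketch of that blacklist pattern (names are illustrative, not ceilometer's internals):

    class PermanentPollError(Exception):
        """Raised when a resource can never be served by this pollster."""

    _blacklist = set()   # (pollster_name, resource_id) pairs to skip from now on

    def poll_resource(pollster_name, resource_id, fetch):
        if (pollster_name, resource_id) in _blacklist:
            return None
        try:
            return fetch(resource_id)
        except PermanentPollError:
            # mirrors "Prevent pollster ... from polling ... anymore!"
            _blacklist.add((pollster_name, resource_id))
            return None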
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.728 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f4c57fbb440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.728 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.728 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.728 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.729 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.727 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-24T22:32:18.727291) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.730 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.730 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f4c57fbbc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.731 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.731 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.731 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.731 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.731 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.incoming.packets volume: 9 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.732 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.732 14 DEBUG ceilometer.compute.pollsters [-] b256c629-ce31-4c1a-a7a6-ed66c07e691a/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.733 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.733 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f4c57fbbcb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.734 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.734 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.734 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.734 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.734 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.735 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.734 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-24T22:32:18.728968) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.735 14 DEBUG ceilometer.compute.pollsters [-] b256c629-ce31-4c1a-a7a6-ed66c07e691a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.736 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.736 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-24T22:32:18.731658) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.736 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f4c57fbbd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.736 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.736 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.737 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.737 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.737 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.737 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.738 14 DEBUG ceilometer.compute.pollsters [-] b256c629-ce31-4c1a-a7a6-ed66c07e691a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.739 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.739 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f4c57fbbda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.739 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.739 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.739 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.740 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.740 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.741 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/network.outgoing.bytes volume: 1278 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.741 14 DEBUG ceilometer.compute.pollsters [-] b256c629-ce31-4c1a-a7a6-ed66c07e691a/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.742 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.742 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f4c57fbbe30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.742 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.742 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.743 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.743 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.743 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.outgoing.bytes.delta volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.743 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/network.outgoing.bytes.delta volume: 1278 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.744 14 DEBUG ceilometer.compute.pollsters [-] b256c629-ce31-4c1a-a7a6-ed66c07e691a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.745 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.745 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f4c57fbb680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.745 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.745 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.746 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.746 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.746 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/memory.usage volume: 43.3046875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.746 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/memory.usage volume: 42.4921875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.747 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-24T22:32:18.734611) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.747 14 DEBUG ceilometer.compute.pollsters [-] b256c629-ce31-4c1a-a7a6-ed66c07e691a/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.747 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-24T22:32:18.737308) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.747 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance b256c629-ce31-4c1a-a7a6-ed66c07e691a: ceilometer.compute.pollsters.NoVolumeException
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.748 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
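memory.usage could not be read for the freshly booted instance b256c629-ce31-4c1a-a7a6-ed66c07e691a, so the pollster logs a WARNING and emits no sample for it rather than failing the whole cycle. A minimal sketch of that per-instance skip; the exception name is reused from the log line, the helper itself is illustrative:

    import logging

    LOG = logging.getLogger(__name__)

    class NoVolumeException(Exception):
        """The hypervisor did not report a value for this statistic."""

    def samples_for(instances, read_stat, meter="memory.usage"):
        samples = []
        for instance_id in instances:
            try:
                samples.append((instance_id, read_stat(instance_id)))
            except NoVolumeException:
                LOG.warning("%s statistic is not available for instance %s",
                            meter, instance_id)
        return samples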
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.748 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f4c57fbbec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.748 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.748 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.748 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.749 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.749 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.749 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-TestServerBasicOps-server-1941519261>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestServerBasicOps-server-1941519261>]
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.749 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f4c57fbb6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.750 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.749 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-24T22:32:18.740065) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.750 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.750 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.750 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.751 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-24T22:32:18.743161) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.751 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-24T22:32:18.746180) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.751 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.incoming.bytes volume: 1352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.751 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/network.incoming.bytes volume: 1431 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.751 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-24T22:32:18.749063) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.752 14 DEBUG ceilometer.compute.pollsters [-] b256c629-ce31-4c1a-a7a6-ed66c07e691a/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.752 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.752 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f4c59b99130>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.752 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.752 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.752 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.753 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.753 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.allocation volume: 30744576 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.753 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.753 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/disk.device.allocation volume: 31137792 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.754 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.754 14 DEBUG ceilometer.compute.pollsters [-] b256c629-ce31-4c1a-a7a6-ed66c07e691a/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.754 14 DEBUG ceilometer.compute.pollsters [-] b256c629-ce31-4c1a-a7a6-ed66c07e691a/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.755 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.755 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f4c57fbbf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.755 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.755 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.755 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.755 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.755 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.755 14 DEBUG ceilometer.compute.pollsters [-] f238e71a-660e-497c-8472-193245387bcf/network.outgoing.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.756 14 DEBUG ceilometer.compute.pollsters [-] b256c629-ce31-4c1a-a7a6-ed66c07e691a/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.756 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-24T22:32:18.750736) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.756 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.756 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-24T22:32:18.753073) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.757 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-24T22:32:18.755567) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.757 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.757 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.757 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.757 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.757 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.757 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.757 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.757 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.758 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.758 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.758 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.758 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.758 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.758 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.758 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.758 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.758 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.758 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.758 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.758 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.758 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.758 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.759 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.759 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.759 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:32:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:32:18.759 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:32:19 compute-0 nova_compute[189608]: 2025-11-24 22:32:19.115 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:19 compute-0 sshd-session[254281]: Invalid user user from 185.217.1.246 port 51839
Nov 24 22:32:21 compute-0 nova_compute[189608]: 2025-11-24 22:32:21.112 189613 DEBUG nova.compute.manager [req-0aae48aa-45bd-46be-8406-b8003625c50b req-428940b3-198d-472e-8c65-da233ce1541c c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Received event network-vif-plugged-91fe5820-9c04-4bb0-94bb-b6c3068e81e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:32:21 compute-0 nova_compute[189608]: 2025-11-24 22:32:21.113 189613 DEBUG oslo_concurrency.lockutils [req-0aae48aa-45bd-46be-8406-b8003625c50b req-428940b3-198d-472e-8c65-da233ce1541c c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "b256c629-ce31-4c1a-a7a6-ed66c07e691a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:32:21 compute-0 nova_compute[189608]: 2025-11-24 22:32:21.113 189613 DEBUG oslo_concurrency.lockutils [req-0aae48aa-45bd-46be-8406-b8003625c50b req-428940b3-198d-472e-8c65-da233ce1541c c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "b256c629-ce31-4c1a-a7a6-ed66c07e691a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:32:21 compute-0 nova_compute[189608]: 2025-11-24 22:32:21.114 189613 DEBUG oslo_concurrency.lockutils [req-0aae48aa-45bd-46be-8406-b8003625c50b req-428940b3-198d-472e-8c65-da233ce1541c c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "b256c629-ce31-4c1a-a7a6-ed66c07e691a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:32:21 compute-0 nova_compute[189608]: 2025-11-24 22:32:21.115 189613 DEBUG nova.compute.manager [req-0aae48aa-45bd-46be-8406-b8003625c50b req-428940b3-198d-472e-8c65-da233ce1541c c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] No waiting events found dispatching network-vif-plugged-91fe5820-9c04-4bb0-94bb-b6c3068e81e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 22:32:21 compute-0 nova_compute[189608]: 2025-11-24 22:32:21.115 189613 WARNING nova.compute.manager [req-0aae48aa-45bd-46be-8406-b8003625c50b req-428940b3-198d-472e-8c65-da233ce1541c c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Received unexpected event network-vif-plugged-91fe5820-9c04-4bb0-94bb-b6c3068e81e1 for instance with vm_state active and task_state None.
Nov 24 22:32:21 compute-0 nova_compute[189608]: 2025-11-24 22:32:21.185 189613 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764023526.1847053, cf45f1e3-b80d-4213-80aa-995f57a9a476 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:32:21 compute-0 nova_compute[189608]: 2025-11-24 22:32:21.186 189613 INFO nova.compute.manager [-] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] VM Stopped (Lifecycle Event)
Nov 24 22:32:21 compute-0 nova_compute[189608]: 2025-11-24 22:32:21.220 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:21 compute-0 nova_compute[189608]: 2025-11-24 22:32:21.222 189613 DEBUG nova.compute.manager [None req-5b7e376c-a8c2-4340-ad03-d9d628ce08f7 - - - - - -] [instance: cf45f1e3-b80d-4213-80aa-995f57a9a476] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:32:22 compute-0 nova_compute[189608]: 2025-11-24 22:32:22.445 189613 DEBUG nova.compute.manager [req-425355f3-090d-4604-aba6-680458eead9b req-5645dafc-58d0-40a8-9b2c-869bcdf79201 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Received event network-changed-91fe5820-9c04-4bb0-94bb-b6c3068e81e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:32:22 compute-0 nova_compute[189608]: 2025-11-24 22:32:22.445 189613 DEBUG nova.compute.manager [req-425355f3-090d-4604-aba6-680458eead9b req-5645dafc-58d0-40a8-9b2c-869bcdf79201 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Refreshing instance network info cache due to event network-changed-91fe5820-9c04-4bb0-94bb-b6c3068e81e1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 22:32:22 compute-0 nova_compute[189608]: 2025-11-24 22:32:22.446 189613 DEBUG oslo_concurrency.lockutils [req-425355f3-090d-4604-aba6-680458eead9b req-5645dafc-58d0-40a8-9b2c-869bcdf79201 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "refresh_cache-b256c629-ce31-4c1a-a7a6-ed66c07e691a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:32:22 compute-0 nova_compute[189608]: 2025-11-24 22:32:22.446 189613 DEBUG oslo_concurrency.lockutils [req-425355f3-090d-4604-aba6-680458eead9b req-5645dafc-58d0-40a8-9b2c-869bcdf79201 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquired lock "refresh_cache-b256c629-ce31-4c1a-a7a6-ed66c07e691a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:32:22 compute-0 nova_compute[189608]: 2025-11-24 22:32:22.446 189613 DEBUG nova.network.neutron [req-425355f3-090d-4604-aba6-680458eead9b req-5645dafc-58d0-40a8-9b2c-869bcdf79201 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Refreshing network info cache for port 91fe5820-9c04-4bb0-94bb-b6c3068e81e1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 22:32:22 compute-0 nova_compute[189608]: 2025-11-24 22:32:22.707 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:24 compute-0 nova_compute[189608]: 2025-11-24 22:32:24.118 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:24 compute-0 nova_compute[189608]: 2025-11-24 22:32:24.298 189613 DEBUG nova.network.neutron [req-425355f3-090d-4604-aba6-680458eead9b req-5645dafc-58d0-40a8-9b2c-869bcdf79201 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Updated VIF entry in instance network info cache for port 91fe5820-9c04-4bb0-94bb-b6c3068e81e1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 22:32:24 compute-0 nova_compute[189608]: 2025-11-24 22:32:24.299 189613 DEBUG nova.network.neutron [req-425355f3-090d-4604-aba6-680458eead9b req-5645dafc-58d0-40a8-9b2c-869bcdf79201 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Updating instance_info_cache with network_info: [{"id": "91fe5820-9c04-4bb0-94bb-b6c3068e81e1", "address": "fa:16:3e:dd:fb:5f", "network": {"id": "4153540b-1891-459a-9fd1-3ba9595f1a33", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1140659131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b288cf23cdd049f48dfaafd888b33ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91fe5820-9c", "ovs_interfaceid": "91fe5820-9c04-4bb0-94bb-b6c3068e81e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:32:24 compute-0 nova_compute[189608]: 2025-11-24 22:32:24.323 189613 DEBUG oslo_concurrency.lockutils [req-425355f3-090d-4604-aba6-680458eead9b req-5645dafc-58d0-40a8-9b2c-869bcdf79201 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Releasing lock "refresh_cache-b256c629-ce31-4c1a-a7a6-ed66c07e691a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:32:24 compute-0 podman[254449]: 2025-11-24 22:32:24.562167264 +0000 UTC m=+0.117191432 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, name=ubi9, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, managed_by=edpm_ansible, release-0.7.12=, version=9.4, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Nov 24 22:32:24 compute-0 podman[254450]: 2025-11-24 22:32:24.566615622 +0000 UTC m=+0.106769839 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, release=1755695350, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7)
Nov 24 22:32:24 compute-0 podman[254456]: 2025-11-24 22:32:24.581641619 +0000 UTC m=+0.117422219 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 24 22:32:26 compute-0 nova_compute[189608]: 2025-11-24 22:32:26.223 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:27 compute-0 nova_compute[189608]: 2025-11-24 22:32:27.130 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:28 compute-0 sshd-session[254281]: error: maximum authentication attempts exceeded for invalid user user from 185.217.1.246 port 51839 ssh2 [preauth]
Nov 24 22:32:28 compute-0 sshd-session[254281]: Disconnecting invalid user user 185.217.1.246 port 51839: Too many authentication failures [preauth]
Nov 24 22:32:29 compute-0 nova_compute[189608]: 2025-11-24 22:32:29.120 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:29 compute-0 podman[254505]: 2025-11-24 22:32:29.533857515 +0000 UTC m=+0.089760690 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 24 22:32:29 compute-0 podman[203795]: time="2025-11-24T22:32:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:32:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:32:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31991 "" "Go-http-client/1.1"
Nov 24 22:32:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:32:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5733 "" "Go-http-client/1.1"
Nov 24 22:32:30 compute-0 nova_compute[189608]: 2025-11-24 22:32:30.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:32:30 compute-0 nova_compute[189608]: 2025-11-24 22:32:30.794 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 22:32:31 compute-0 nova_compute[189608]: 2025-11-24 22:32:31.062 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "refresh_cache-a3bee9ba-6618-44bd-a443-da9fff6862a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:32:31 compute-0 nova_compute[189608]: 2025-11-24 22:32:31.062 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquired lock "refresh_cache-a3bee9ba-6618-44bd-a443-da9fff6862a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:32:31 compute-0 nova_compute[189608]: 2025-11-24 22:32:31.062 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 24 22:32:31 compute-0 nova_compute[189608]: 2025-11-24 22:32:31.226 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:31 compute-0 openstack_network_exporter[205945]: ERROR   22:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:32:31 compute-0 openstack_network_exporter[205945]: ERROR   22:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:32:31 compute-0 openstack_network_exporter[205945]: ERROR   22:32:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:32:31 compute-0 openstack_network_exporter[205945]: ERROR   22:32:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:32:31 compute-0 openstack_network_exporter[205945]: ERROR   22:32:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:32:31 compute-0 podman[254532]: 2025-11-24 22:32:31.546559354 +0000 UTC m=+0.085075055 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 22:32:31 compute-0 podman[254531]: 2025-11-24 22:32:31.584263485 +0000 UTC m=+0.139450054 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller)
Nov 24 22:32:33 compute-0 nova_compute[189608]: 2025-11-24 22:32:33.357 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Updating instance_info_cache with network_info: [{"id": "5efccbc3-b2bb-4d9d-ba64-9382a4b2487b", "address": "fa:16:3e:40:6c:bb", "network": {"id": "a164481b-21c8-4cae-a6e9-b470d8a55a1f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.88", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6957a775da42c9b535753d6b0279d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5efccbc3-b2", "ovs_interfaceid": "5efccbc3-b2bb-4d9d-ba64-9382a4b2487b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:32:33 compute-0 nova_compute[189608]: 2025-11-24 22:32:33.375 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Releasing lock "refresh_cache-a3bee9ba-6618-44bd-a443-da9fff6862a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:32:33 compute-0 nova_compute[189608]: 2025-11-24 22:32:33.376 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 24 22:32:34 compute-0 nova_compute[189608]: 2025-11-24 22:32:34.127 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:34 compute-0 nova_compute[189608]: 2025-11-24 22:32:34.148 189613 DEBUG oslo_concurrency.lockutils [None req-1a1b50b6-7fac-47cd-9fbb-082dc9e0072f 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Acquiring lock "f238e71a-660e-497c-8472-193245387bcf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:32:34 compute-0 nova_compute[189608]: 2025-11-24 22:32:34.150 189613 DEBUG oslo_concurrency.lockutils [None req-1a1b50b6-7fac-47cd-9fbb-082dc9e0072f 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Lock "f238e71a-660e-497c-8472-193245387bcf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:32:34 compute-0 nova_compute[189608]: 2025-11-24 22:32:34.151 189613 DEBUG oslo_concurrency.lockutils [None req-1a1b50b6-7fac-47cd-9fbb-082dc9e0072f 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Acquiring lock "f238e71a-660e-497c-8472-193245387bcf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:32:34 compute-0 nova_compute[189608]: 2025-11-24 22:32:34.152 189613 DEBUG oslo_concurrency.lockutils [None req-1a1b50b6-7fac-47cd-9fbb-082dc9e0072f 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Lock "f238e71a-660e-497c-8472-193245387bcf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:32:34 compute-0 nova_compute[189608]: 2025-11-24 22:32:34.153 189613 DEBUG oslo_concurrency.lockutils [None req-1a1b50b6-7fac-47cd-9fbb-082dc9e0072f 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Lock "f238e71a-660e-497c-8472-193245387bcf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:32:34 compute-0 nova_compute[189608]: 2025-11-24 22:32:34.157 189613 INFO nova.compute.manager [None req-1a1b50b6-7fac-47cd-9fbb-082dc9e0072f 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Terminating instance
Nov 24 22:32:34 compute-0 nova_compute[189608]: 2025-11-24 22:32:34.160 189613 DEBUG nova.compute.manager [None req-1a1b50b6-7fac-47cd-9fbb-082dc9e0072f 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 24 22:32:34 compute-0 kernel: tapfdd48bd9-f9 (unregistering): left promiscuous mode
Nov 24 22:32:34 compute-0 NetworkManager[56413]: <info>  [1764023554.2129] device (tapfdd48bd9-f9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 24 22:32:34 compute-0 ovn_controller[97889]: 2025-11-24T22:32:34Z|00170|binding|INFO|Releasing lport fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13 from this chassis (sb_readonly=0)
Nov 24 22:32:34 compute-0 ovn_controller[97889]: 2025-11-24T22:32:34Z|00171|binding|INFO|Setting lport fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13 down in Southbound
Nov 24 22:32:34 compute-0 ovn_controller[97889]: 2025-11-24T22:32:34Z|00172|binding|INFO|Removing iface tapfdd48bd9-f9 ovn-installed in OVS
Nov 24 22:32:34 compute-0 nova_compute[189608]: 2025-11-24 22:32:34.217 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:34 compute-0 nova_compute[189608]: 2025-11-24 22:32:34.226 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:34 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:34.235 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:76:1e 10.100.0.12'], port_security=['fa:16:3e:40:76:1e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f238e71a-660e-497c-8472-193245387bcf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-29585b3c-5eec-4652-ae2f-4aa9ec19d924', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '97e21ffeec1c4428ba3d70499fc3281f', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'f7d22eb6-0a82-485c-96cc-cd31ea984470', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.192', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a063a9f4-1c3d-438a-9e7c-e5a5c01b330e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], logical_port=fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:32:34 compute-0 nova_compute[189608]: 2025-11-24 22:32:34.236 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:34 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:34.238 106776 INFO neutron.agent.ovn.metadata.agent [-] Port fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13 in datapath 29585b3c-5eec-4652-ae2f-4aa9ec19d924 unbound from our chassis
Nov 24 22:32:34 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:34.241 106776 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 29585b3c-5eec-4652-ae2f-4aa9ec19d924, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 22:32:34 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:34.243 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[8c1db48d-550d-4ea9-b5b7-5f632d6969bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:32:34 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:34.243 106776 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924 namespace which is not needed anymore
Nov 24 22:32:34 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d00000009.scope: Deactivated successfully.
Nov 24 22:32:34 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d00000009.scope: Consumed 43.829s CPU time.
Nov 24 22:32:34 compute-0 systemd-machined[155884]: Machine qemu-14-instance-00000009 terminated.
Nov 24 22:32:34 compute-0 ovn_controller[97889]: 2025-11-24T22:32:34Z|00173|binding|INFO|Releasing lport ce3870c0-48db-470b-8d5d-479134c9b554 from this chassis (sb_readonly=0)
Nov 24 22:32:34 compute-0 ovn_controller[97889]: 2025-11-24T22:32:34Z|00174|binding|INFO|Releasing lport 7dcd4ddb-3860-49b9-87ed-1daf692defef from this chassis (sb_readonly=0)
Nov 24 22:32:34 compute-0 ovn_controller[97889]: 2025-11-24T22:32:34Z|00175|binding|INFO|Releasing lport ea50a485-7290-4ceb-bc3f-d004948a33d1 from this chassis (sb_readonly=0)
Nov 24 22:32:34 compute-0 nova_compute[189608]: 2025-11-24 22:32:34.438 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:34 compute-0 nova_compute[189608]: 2025-11-24 22:32:34.471 189613 DEBUG nova.compute.manager [req-df3a7874-a632-4904-a020-b0e7425d0226 req-8b8e8ca6-8aec-417b-b104-986ec0af26bb c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Received event network-vif-unplugged-fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:32:34 compute-0 nova_compute[189608]: 2025-11-24 22:32:34.472 189613 DEBUG oslo_concurrency.lockutils [req-df3a7874-a632-4904-a020-b0e7425d0226 req-8b8e8ca6-8aec-417b-b104-986ec0af26bb c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "f238e71a-660e-497c-8472-193245387bcf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:32:34 compute-0 nova_compute[189608]: 2025-11-24 22:32:34.472 189613 DEBUG oslo_concurrency.lockutils [req-df3a7874-a632-4904-a020-b0e7425d0226 req-8b8e8ca6-8aec-417b-b104-986ec0af26bb c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "f238e71a-660e-497c-8472-193245387bcf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:32:34 compute-0 nova_compute[189608]: 2025-11-24 22:32:34.473 189613 DEBUG oslo_concurrency.lockutils [req-df3a7874-a632-4904-a020-b0e7425d0226 req-8b8e8ca6-8aec-417b-b104-986ec0af26bb c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "f238e71a-660e-497c-8472-193245387bcf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:32:34 compute-0 nova_compute[189608]: 2025-11-24 22:32:34.473 189613 DEBUG nova.compute.manager [req-df3a7874-a632-4904-a020-b0e7425d0226 req-8b8e8ca6-8aec-417b-b104-986ec0af26bb c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] No waiting events found dispatching network-vif-unplugged-fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 22:32:34 compute-0 nova_compute[189608]: 2025-11-24 22:32:34.473 189613 DEBUG nova.compute.manager [req-df3a7874-a632-4904-a020-b0e7425d0226 req-8b8e8ca6-8aec-417b-b104-986ec0af26bb c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Received event network-vif-unplugged-fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 24 22:32:34 compute-0 nova_compute[189608]: 2025-11-24 22:32:34.478 189613 INFO nova.virt.libvirt.driver [-] [instance: f238e71a-660e-497c-8472-193245387bcf] Instance destroyed successfully.
Nov 24 22:32:34 compute-0 nova_compute[189608]: 2025-11-24 22:32:34.478 189613 DEBUG nova.objects.instance [None req-1a1b50b6-7fac-47cd-9fbb-082dc9e0072f 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Lazy-loading 'resources' on Instance uuid f238e71a-660e-497c-8472-193245387bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:32:34 compute-0 nova_compute[189608]: 2025-11-24 22:32:34.491 189613 DEBUG nova.virt.libvirt.vif [None req-1a1b50b6-7fac-47cd-9fbb-082dc9e0072f 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T22:29:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1585588029',display_name='tempest-ServerActionsTestJSON-server-1585588029',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1585588029',id=9,image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDLE9GM2vVS7DtVhD6R5uAcKdwWIHZiUIj0cZuSYgN8E0Q128lQ7w/rrfvzePQt5xD3e+tmmR17Qm6/SP88RdZiNDkcZe488bZoDDPSOfWrMiNmhlRVlcu8KaGfz+0SLYw==',key_name='tempest-keypair-731506490',keypairs=<?>,launch_index=0,launched_at=2025-11-24T22:30:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='97e21ffeec1c4428ba3d70499fc3281f',ramdisk_id='',reservation_id='r-0mavx5gw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-2097692874',owner_user_name='tempest-ServerActionsTestJSON-2097692874-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T22:31:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='11288fa7771048b4a8faf1d6485ab059',uuid=f238e71a-660e-497c-8472-193245387bcf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13", "address": "fa:16:3e:40:76:1e", "network": {"id": "29585b3c-5eec-4652-ae2f-4aa9ec19d924", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-98517945-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "97e21ffeec1c4428ba3d70499fc3281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd48bd9-f9", "ovs_interfaceid": "fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 24 22:32:34 compute-0 nova_compute[189608]: 2025-11-24 22:32:34.492 189613 DEBUG nova.network.os_vif_util [None req-1a1b50b6-7fac-47cd-9fbb-082dc9e0072f 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Converting VIF {"id": "fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13", "address": "fa:16:3e:40:76:1e", "network": {"id": "29585b3c-5eec-4652-ae2f-4aa9ec19d924", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-98517945-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "97e21ffeec1c4428ba3d70499fc3281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd48bd9-f9", "ovs_interfaceid": "fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 22:32:34 compute-0 nova_compute[189608]: 2025-11-24 22:32:34.492 189613 DEBUG nova.network.os_vif_util [None req-1a1b50b6-7fac-47cd-9fbb-082dc9e0072f 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:40:76:1e,bridge_name='br-int',has_traffic_filtering=True,id=fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13,network=Network(29585b3c-5eec-4652-ae2f-4aa9ec19d924),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdd48bd9-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 22:32:34 compute-0 nova_compute[189608]: 2025-11-24 22:32:34.493 189613 DEBUG os_vif [None req-1a1b50b6-7fac-47cd-9fbb-082dc9e0072f 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:40:76:1e,bridge_name='br-int',has_traffic_filtering=True,id=fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13,network=Network(29585b3c-5eec-4652-ae2f-4aa9ec19d924),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdd48bd9-f9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 24 22:32:34 compute-0 nova_compute[189608]: 2025-11-24 22:32:34.494 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:34 compute-0 nova_compute[189608]: 2025-11-24 22:32:34.494 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfdd48bd9-f9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:32:34 compute-0 nova_compute[189608]: 2025-11-24 22:32:34.496 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:34 compute-0 nova_compute[189608]: 2025-11-24 22:32:34.498 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 22:32:34 compute-0 nova_compute[189608]: 2025-11-24 22:32:34.500 189613 INFO os_vif [None req-1a1b50b6-7fac-47cd-9fbb-082dc9e0072f 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:40:76:1e,bridge_name='br-int',has_traffic_filtering=True,id=fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13,network=Network(29585b3c-5eec-4652-ae2f-4aa9ec19d924),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdd48bd9-f9')
Nov 24 22:32:34 compute-0 nova_compute[189608]: 2025-11-24 22:32:34.501 189613 INFO nova.virt.libvirt.driver [None req-1a1b50b6-7fac-47cd-9fbb-082dc9e0072f 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Deleting instance files /var/lib/nova/instances/f238e71a-660e-497c-8472-193245387bcf_del
Nov 24 22:32:34 compute-0 nova_compute[189608]: 2025-11-24 22:32:34.501 189613 INFO nova.virt.libvirt.driver [None req-1a1b50b6-7fac-47cd-9fbb-082dc9e0072f 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Deletion of /var/lib/nova/instances/f238e71a-660e-497c-8472-193245387bcf_del complete
Nov 24 22:32:34 compute-0 neutron-haproxy-ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924[253843]: [NOTICE]   (253847) : haproxy version is 2.8.14-c23fe91
Nov 24 22:32:34 compute-0 neutron-haproxy-ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924[253843]: [NOTICE]   (253847) : path to executable is /usr/sbin/haproxy
Nov 24 22:32:34 compute-0 neutron-haproxy-ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924[253843]: [WARNING]  (253847) : Exiting Master process...
Nov 24 22:32:34 compute-0 neutron-haproxy-ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924[253843]: [ALERT]    (253847) : Current worker (253849) exited with code 143 (Terminated)
Nov 24 22:32:34 compute-0 neutron-haproxy-ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924[253843]: [WARNING]  (253847) : All workers exited. Exiting... (0)
Nov 24 22:32:34 compute-0 systemd[1]: libpod-3ee7539097e986791d59e38728eb67ee91b476e1d9039d513c39cd8168d12e3c.scope: Deactivated successfully.
Nov 24 22:32:34 compute-0 podman[254592]: 2025-11-24 22:32:34.515017791 +0000 UTC m=+0.134687646 container died 3ee7539097e986791d59e38728eb67ee91b476e1d9039d513c39cd8168d12e3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 24 22:32:34 compute-0 nova_compute[189608]: 2025-11-24 22:32:34.578 189613 INFO nova.compute.manager [None req-1a1b50b6-7fac-47cd-9fbb-082dc9e0072f 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Took 0.42 seconds to destroy the instance on the hypervisor.
Nov 24 22:32:34 compute-0 nova_compute[189608]: 2025-11-24 22:32:34.579 189613 DEBUG oslo.service.loopingcall [None req-1a1b50b6-7fac-47cd-9fbb-082dc9e0072f 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 24 22:32:34 compute-0 nova_compute[189608]: 2025-11-24 22:32:34.579 189613 DEBUG nova.compute.manager [-] [instance: f238e71a-660e-497c-8472-193245387bcf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 24 22:32:34 compute-0 nova_compute[189608]: 2025-11-24 22:32:34.579 189613 DEBUG nova.network.neutron [-] [instance: f238e71a-660e-497c-8472-193245387bcf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 24 22:32:34 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3ee7539097e986791d59e38728eb67ee91b476e1d9039d513c39cd8168d12e3c-userdata-shm.mount: Deactivated successfully.
Nov 24 22:32:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-89b0f2899f8766daddc9797cb8372a50627b0f67d92e62aec33eda28b1794e08-merged.mount: Deactivated successfully.
Nov 24 22:32:34 compute-0 podman[254592]: 2025-11-24 22:32:34.723788038 +0000 UTC m=+0.343457913 container cleanup 3ee7539097e986791d59e38728eb67ee91b476e1d9039d513c39cd8168d12e3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 24 22:32:34 compute-0 sshd-session[254529]: Invalid user user from 185.217.1.246 port 45447
Nov 24 22:32:34 compute-0 systemd[1]: libpod-conmon-3ee7539097e986791d59e38728eb67ee91b476e1d9039d513c39cd8168d12e3c.scope: Deactivated successfully.
Nov 24 22:32:34 compute-0 podman[254631]: 2025-11-24 22:32:34.954553359 +0000 UTC m=+0.189619254 container remove 3ee7539097e986791d59e38728eb67ee91b476e1d9039d513c39cd8168d12e3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 22:32:34 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:34.968 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[6e6bc270-5934-45be-87c9-34cff5595261]: (4, ('Mon Nov 24 10:32:34 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924 (3ee7539097e986791d59e38728eb67ee91b476e1d9039d513c39cd8168d12e3c)\n3ee7539097e986791d59e38728eb67ee91b476e1d9039d513c39cd8168d12e3c\nMon Nov 24 10:32:34 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924 (3ee7539097e986791d59e38728eb67ee91b476e1d9039d513c39cd8168d12e3c)\n3ee7539097e986791d59e38728eb67ee91b476e1d9039d513c39cd8168d12e3c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:32:34 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:34.971 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[339aee08-6813-44e7-9ee4-2685956544f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:32:34 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:34.973 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap29585b3c-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:32:34 compute-0 nova_compute[189608]: 2025-11-24 22:32:34.977 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:34 compute-0 kernel: tap29585b3c-50: left promiscuous mode
Nov 24 22:32:34 compute-0 nova_compute[189608]: 2025-11-24 22:32:34.981 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:34 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:34.989 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[3c8789ec-7a0a-46ff-9642-63ca5c2d69ab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:32:34 compute-0 nova_compute[189608]: 2025-11-24 22:32:34.998 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:35 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:35.016 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[57ce3e1c-b684-40f8-88d0-61659eebcc2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:32:35 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:35.018 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[4c1fdd58-4c97-4b75-b249-7d321ad7a192]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:32:35 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:35.035 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[d7144017-b078-4d4c-930c-f581341def34]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 533627, 'reachable_time': 41560, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254645, 'error': None, 'target': 'ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:32:35 compute-0 systemd[1]: run-netns-ovnmeta\x2d29585b3c\x2d5eec\x2d4652\x2dae2f\x2d4aa9ec19d924.mount: Deactivated successfully.
Nov 24 22:32:35 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:35.041 106891 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-29585b3c-5eec-4652-ae2f-4aa9ec19d924 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 24 22:32:35 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:35.041 106891 DEBUG oslo.privsep.daemon [-] privsep: reply[fcfa230d-3315-4b21-b79b-82700850bf6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:32:35 compute-0 nova_compute[189608]: 2025-11-24 22:32:35.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:32:35 compute-0 nova_compute[189608]: 2025-11-24 22:32:35.821 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:32:35 compute-0 nova_compute[189608]: 2025-11-24 22:32:35.822 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:32:35 compute-0 nova_compute[189608]: 2025-11-24 22:32:35.823 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:32:35 compute-0 nova_compute[189608]: 2025-11-24 22:32:35.824 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 22:32:35 compute-0 nova_compute[189608]: 2025-11-24 22:32:35.892 189613 DEBUG nova.network.neutron [-] [instance: f238e71a-660e-497c-8472-193245387bcf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:32:35 compute-0 nova_compute[189608]: 2025-11-24 22:32:35.954 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:32:35 compute-0 nova_compute[189608]: 2025-11-24 22:32:35.978 189613 INFO nova.compute.manager [-] [instance: f238e71a-660e-497c-8472-193245387bcf] Took 1.40 seconds to deallocate network for instance.
Nov 24 22:32:36 compute-0 nova_compute[189608]: 2025-11-24 22:32:36.020 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:32:36 compute-0 nova_compute[189608]: 2025-11-24 22:32:36.021 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:32:36 compute-0 nova_compute[189608]: 2025-11-24 22:32:36.083 189613 DEBUG oslo_concurrency.lockutils [None req-1a1b50b6-7fac-47cd-9fbb-082dc9e0072f 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:32:36 compute-0 nova_compute[189608]: 2025-11-24 22:32:36.087 189613 DEBUG oslo_concurrency.lockutils [None req-1a1b50b6-7fac-47cd-9fbb-082dc9e0072f 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:32:36 compute-0 nova_compute[189608]: 2025-11-24 22:32:36.090 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:32:36 compute-0 nova_compute[189608]: 2025-11-24 22:32:36.112 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b256c629-ce31-4c1a-a7a6-ed66c07e691a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:32:36 compute-0 nova_compute[189608]: 2025-11-24 22:32:36.210 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b256c629-ce31-4c1a-a7a6-ed66c07e691a/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:32:36 compute-0 nova_compute[189608]: 2025-11-24 22:32:36.212 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b256c629-ce31-4c1a-a7a6-ed66c07e691a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:32:36 compute-0 nova_compute[189608]: 2025-11-24 22:32:36.269 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b256c629-ce31-4c1a-a7a6-ed66c07e691a/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:32:36 compute-0 nova_compute[189608]: 2025-11-24 22:32:36.384 189613 DEBUG nova.compute.provider_tree [None req-1a1b50b6-7fac-47cd-9fbb-082dc9e0072f 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:32:36 compute-0 nova_compute[189608]: 2025-11-24 22:32:36.402 189613 DEBUG nova.scheduler.client.report [None req-1a1b50b6-7fac-47cd-9fbb-082dc9e0072f 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:32:36 compute-0 nova_compute[189608]: 2025-11-24 22:32:36.426 189613 DEBUG oslo_concurrency.lockutils [None req-1a1b50b6-7fac-47cd-9fbb-082dc9e0072f 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.339s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:32:36 compute-0 nova_compute[189608]: 2025-11-24 22:32:36.473 189613 INFO nova.scheduler.client.report [None req-1a1b50b6-7fac-47cd-9fbb-082dc9e0072f 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Deleted allocations for instance f238e71a-660e-497c-8472-193245387bcf
Nov 24 22:32:36 compute-0 nova_compute[189608]: 2025-11-24 22:32:36.663 189613 DEBUG oslo_concurrency.lockutils [None req-1a1b50b6-7fac-47cd-9fbb-082dc9e0072f 11288fa7771048b4a8faf1d6485ab059 97e21ffeec1c4428ba3d70499fc3281f - - default default] Lock "f238e71a-660e-497c-8472-193245387bcf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.514s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:32:36 compute-0 nova_compute[189608]: 2025-11-24 22:32:36.673 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:32:36 compute-0 nova_compute[189608]: 2025-11-24 22:32:36.674 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5004MB free_disk=72.09735870361328GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 22:32:36 compute-0 nova_compute[189608]: 2025-11-24 22:32:36.674 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:32:36 compute-0 nova_compute[189608]: 2025-11-24 22:32:36.675 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:32:36 compute-0 nova_compute[189608]: 2025-11-24 22:32:36.736 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance a3bee9ba-6618-44bd-a443-da9fff6862a9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:32:36 compute-0 nova_compute[189608]: 2025-11-24 22:32:36.736 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance b256c629-ce31-4c1a-a7a6-ed66c07e691a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:32:36 compute-0 nova_compute[189608]: 2025-11-24 22:32:36.737 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 22:32:36 compute-0 nova_compute[189608]: 2025-11-24 22:32:36.737 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 22:32:36 compute-0 nova_compute[189608]: 2025-11-24 22:32:36.800 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:32:36 compute-0 nova_compute[189608]: 2025-11-24 22:32:36.812 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:32:36 compute-0 nova_compute[189608]: 2025-11-24 22:32:36.916 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 22:32:36 compute-0 nova_compute[189608]: 2025-11-24 22:32:36.917 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.242s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:32:36 compute-0 sshd-session[254529]: Disconnecting invalid user user 185.217.1.246 port 45447: Change of username or service not allowed: (user,ssh-connection) -> (dqi,ssh-connection) [preauth]
Nov 24 22:32:37 compute-0 sshd-session[254660]: Invalid user sol from 45.148.10.240 port 44602
Nov 24 22:32:37 compute-0 nova_compute[189608]: 2025-11-24 22:32:37.121 189613 DEBUG nova.compute.manager [req-89384ce1-313d-42d8-a618-ba9ddb7fac25 req-b098ff67-8478-4d96-bbc6-1ad592fd479e c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Received event network-vif-plugged-fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:32:37 compute-0 nova_compute[189608]: 2025-11-24 22:32:37.121 189613 DEBUG oslo_concurrency.lockutils [req-89384ce1-313d-42d8-a618-ba9ddb7fac25 req-b098ff67-8478-4d96-bbc6-1ad592fd479e c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "f238e71a-660e-497c-8472-193245387bcf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:32:37 compute-0 nova_compute[189608]: 2025-11-24 22:32:37.121 189613 DEBUG oslo_concurrency.lockutils [req-89384ce1-313d-42d8-a618-ba9ddb7fac25 req-b098ff67-8478-4d96-bbc6-1ad592fd479e c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "f238e71a-660e-497c-8472-193245387bcf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:32:37 compute-0 nova_compute[189608]: 2025-11-24 22:32:37.122 189613 DEBUG oslo_concurrency.lockutils [req-89384ce1-313d-42d8-a618-ba9ddb7fac25 req-b098ff67-8478-4d96-bbc6-1ad592fd479e c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "f238e71a-660e-497c-8472-193245387bcf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:32:37 compute-0 nova_compute[189608]: 2025-11-24 22:32:37.122 189613 DEBUG nova.compute.manager [req-89384ce1-313d-42d8-a618-ba9ddb7fac25 req-b098ff67-8478-4d96-bbc6-1ad592fd479e c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] No waiting events found dispatching network-vif-plugged-fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 22:32:37 compute-0 nova_compute[189608]: 2025-11-24 22:32:37.122 189613 WARNING nova.compute.manager [req-89384ce1-313d-42d8-a618-ba9ddb7fac25 req-b098ff67-8478-4d96-bbc6-1ad592fd479e c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Received unexpected event network-vif-plugged-fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13 for instance with vm_state deleted and task_state None.
Nov 24 22:32:37 compute-0 nova_compute[189608]: 2025-11-24 22:32:37.122 189613 DEBUG nova.compute.manager [req-89384ce1-313d-42d8-a618-ba9ddb7fac25 req-b098ff67-8478-4d96-bbc6-1ad592fd479e c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: f238e71a-660e-497c-8472-193245387bcf] Received event network-vif-deleted-fdd48bd9-f9d2-4453-ab6d-6fb1dbe74e13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:32:37 compute-0 sshd-session[254660]: Connection closed by invalid user sol 45.148.10.240 port 44602 [preauth]
Nov 24 22:32:39 compute-0 nova_compute[189608]: 2025-11-24 22:32:39.127 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:39 compute-0 nova_compute[189608]: 2025-11-24 22:32:39.498 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:39 compute-0 nova_compute[189608]: 2025-11-24 22:32:39.920 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:32:41 compute-0 podman[254664]: 2025-11-24 22:32:41.555676731 +0000 UTC m=+0.102901360 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 24 22:32:41 compute-0 nova_compute[189608]: 2025-11-24 22:32:41.789 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:32:41 compute-0 nova_compute[189608]: 2025-11-24 22:32:41.792 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:32:42 compute-0 nova_compute[189608]: 2025-11-24 22:32:42.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:32:42 compute-0 nova_compute[189608]: 2025-11-24 22:32:42.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:32:44 compute-0 nova_compute[189608]: 2025-11-24 22:32:44.132 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:44 compute-0 nova_compute[189608]: 2025-11-24 22:32:44.501 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:44 compute-0 nova_compute[189608]: 2025-11-24 22:32:44.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:32:45 compute-0 sshd-session[254662]: Invalid user dqi from 185.217.1.246 port 16781
Nov 24 22:32:45 compute-0 podman[254685]: 2025-11-24 22:32:45.146928179 +0000 UTC m=+0.108833273 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, managed_by=edpm_ansible)
Nov 24 22:32:45 compute-0 ovn_controller[97889]: 2025-11-24T22:32:45Z|00176|binding|INFO|Releasing lport ce3870c0-48db-470b-8d5d-479134c9b554 from this chassis (sb_readonly=0)
Nov 24 22:32:45 compute-0 ovn_controller[97889]: 2025-11-24T22:32:45Z|00177|binding|INFO|Releasing lport ea50a485-7290-4ceb-bc3f-d004948a33d1 from this chassis (sb_readonly=0)
Nov 24 22:32:45 compute-0 nova_compute[189608]: 2025-11-24 22:32:45.716 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:46 compute-0 sshd-session[254662]: Disconnecting invalid user dqi 185.217.1.246 port 16781: Change of username or service not allowed: (dqi,ssh-connection) -> (super,ssh-connection) [preauth]
Nov 24 22:32:46 compute-0 podman[254705]: 2025-11-24 22:32:46.604816378 +0000 UTC m=+0.142967813 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 24 22:32:48 compute-0 nova_compute[189608]: 2025-11-24 22:32:48.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:32:48 compute-0 nova_compute[189608]: 2025-11-24 22:32:48.794 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 22:32:49 compute-0 nova_compute[189608]: 2025-11-24 22:32:49.134 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:49 compute-0 nova_compute[189608]: 2025-11-24 22:32:49.474 189613 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764023554.472526, f238e71a-660e-497c-8472-193245387bcf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:32:49 compute-0 nova_compute[189608]: 2025-11-24 22:32:49.475 189613 INFO nova.compute.manager [-] [instance: f238e71a-660e-497c-8472-193245387bcf] VM Stopped (Lifecycle Event)
Nov 24 22:32:49 compute-0 nova_compute[189608]: 2025-11-24 22:32:49.495 189613 DEBUG nova.compute.manager [None req-1010b9f6-e9f3-408f-a9e5-88e766763642 - - - - - -] [instance: f238e71a-660e-497c-8472-193245387bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:32:49 compute-0 nova_compute[189608]: 2025-11-24 22:32:49.504 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:51 compute-0 ovn_controller[97889]: 2025-11-24T22:32:51Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:dd:fb:5f 10.100.0.7
Nov 24 22:32:51 compute-0 ovn_controller[97889]: 2025-11-24T22:32:51Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:dd:fb:5f 10.100.0.7
Nov 24 22:32:51 compute-0 sshd-session[254726]: Invalid user super from 185.217.1.246 port 46646
Nov 24 22:32:51 compute-0 sshd-session[254726]: Disconnecting invalid user super 185.217.1.246 port 46646: Change of username or service not allowed: (super,ssh-connection) -> (administrator,ssh-connection) [preauth]
Nov 24 22:32:54 compute-0 nova_compute[189608]: 2025-11-24 22:32:54.139 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:54 compute-0 nova_compute[189608]: 2025-11-24 22:32:54.508 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:54.597 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:32:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:54.599 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:32:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:32:54.601 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:32:55 compute-0 podman[254743]: 2025-11-24 22:32:55.595771848 +0000 UTC m=+0.120912338 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, release=1755695350, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, version=9.6, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Nov 24 22:32:55 compute-0 podman[254744]: 2025-11-24 22:32:55.6100056 +0000 UTC m=+0.137140593 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 22:32:55 compute-0 podman[254742]: 2025-11-24 22:32:55.621224468 +0000 UTC m=+0.157036679 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, distribution-scope=public, maintainer=Red Hat, Inc., release-0.7.12=, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, container_name=kepler, architecture=x86_64, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, config_id=edpm, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.expose-services=, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9)
Nov 24 22:32:58 compute-0 sshd-session[254740]: Invalid user administrator from 185.217.1.246 port 4012
Nov 24 22:32:59 compute-0 nova_compute[189608]: 2025-11-24 22:32:59.143 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:59 compute-0 nova_compute[189608]: 2025-11-24 22:32:59.512 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:32:59 compute-0 sshd-session[254740]: Disconnecting invalid user administrator 185.217.1.246 port 4012: Change of username or service not allowed: (administrator,ssh-connection) -> (syncthing,ssh-connection) [preauth]
Nov 24 22:32:59 compute-0 podman[203795]: time="2025-11-24T22:32:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:32:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:32:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30758 "" "Go-http-client/1.1"
Nov 24 22:32:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:32:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5271 "" "Go-http-client/1.1"
Nov 24 22:33:00 compute-0 podman[254799]: 2025-11-24 22:33:00.53212686 +0000 UTC m=+0.080589225 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 24 22:33:01 compute-0 openstack_network_exporter[205945]: ERROR   22:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:33:01 compute-0 openstack_network_exporter[205945]: ERROR   22:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:33:01 compute-0 openstack_network_exporter[205945]: ERROR   22:33:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:33:01 compute-0 openstack_network_exporter[205945]: ERROR   22:33:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:33:01 compute-0 openstack_network_exporter[205945]: ERROR   22:33:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:33:02 compute-0 podman[254825]: 2025-11-24 22:33:02.530765962 +0000 UTC m=+0.082539445 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:33:02 compute-0 podman[254824]: 2025-11-24 22:33:02.587961699 +0000 UTC m=+0.146632307 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 24 22:33:04 compute-0 nova_compute[189608]: 2025-11-24 22:33:04.147 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:33:04 compute-0 sshd-session[254823]: Invalid user syncthing from 185.217.1.246 port 39766
Nov 24 22:33:04 compute-0 nova_compute[189608]: 2025-11-24 22:33:04.515 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:33:06 compute-0 sshd-session[254823]: Disconnecting invalid user syncthing 185.217.1.246 port 39766: Change of username or service not allowed: (syncthing,ssh-connection) -> (anonymous,ssh-connection) [preauth]
Nov 24 22:33:09 compute-0 nova_compute[189608]: 2025-11-24 22:33:09.149 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:33:09 compute-0 nova_compute[189608]: 2025-11-24 22:33:09.518 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:33:12 compute-0 podman[254871]: 2025-11-24 22:33:12.526824542 +0000 UTC m=+0.075131985 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 24 22:33:13 compute-0 sshd-session[254869]: Invalid user anonymous from 185.217.1.246 port 60724
Nov 24 22:33:14 compute-0 nova_compute[189608]: 2025-11-24 22:33:14.153 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:33:14 compute-0 nova_compute[189608]: 2025-11-24 22:33:14.521 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:33:15 compute-0 podman[254894]: 2025-11-24 22:33:15.55612337 +0000 UTC m=+0.103537429 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Nov 24 22:33:15 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:33:15.723 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7e:33:1a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '7e:8a:65:da:aa:a2'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:33:15 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:33:15.724 106776 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 22:33:15 compute-0 nova_compute[189608]: 2025-11-24 22:33:15.723 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:33:17 compute-0 podman[254912]: 2025-11-24 22:33:17.606248331 +0000 UTC m=+0.147038429 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 24 22:33:18 compute-0 sshd-session[254869]: error: maximum authentication attempts exceeded for invalid user anonymous from 185.217.1.246 port 60724 ssh2 [preauth]
Nov 24 22:33:18 compute-0 sshd-session[254869]: Disconnecting invalid user anonymous 185.217.1.246 port 60724: Too many authentication failures [preauth]
Nov 24 22:33:18 compute-0 nova_compute[189608]: 2025-11-24 22:33:18.967 189613 DEBUG oslo_concurrency.lockutils [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Acquiring lock "715e08a7-7174-4e14-a83d-67aab18333d8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:33:18 compute-0 nova_compute[189608]: 2025-11-24 22:33:18.968 189613 DEBUG oslo_concurrency.lockutils [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Lock "715e08a7-7174-4e14-a83d-67aab18333d8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:33:18 compute-0 nova_compute[189608]: 2025-11-24 22:33:18.985 189613 DEBUG nova.compute.manager [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 24 22:33:19 compute-0 nova_compute[189608]: 2025-11-24 22:33:19.070 189613 DEBUG oslo_concurrency.lockutils [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:33:19 compute-0 nova_compute[189608]: 2025-11-24 22:33:19.071 189613 DEBUG oslo_concurrency.lockutils [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:33:19 compute-0 nova_compute[189608]: 2025-11-24 22:33:19.089 189613 DEBUG nova.virt.hardware [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 24 22:33:19 compute-0 nova_compute[189608]: 2025-11-24 22:33:19.090 189613 INFO nova.compute.claims [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Claim successful on node compute-0.ctlplane.example.com
Nov 24 22:33:19 compute-0 nova_compute[189608]: 2025-11-24 22:33:19.159 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:33:19 compute-0 nova_compute[189608]: 2025-11-24 22:33:19.252 189613 DEBUG nova.compute.provider_tree [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:33:19 compute-0 nova_compute[189608]: 2025-11-24 22:33:19.268 189613 DEBUG nova.scheduler.client.report [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:33:19 compute-0 nova_compute[189608]: 2025-11-24 22:33:19.291 189613 DEBUG oslo_concurrency.lockutils [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.220s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:33:19 compute-0 nova_compute[189608]: 2025-11-24 22:33:19.292 189613 DEBUG nova.compute.manager [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 24 22:33:19 compute-0 nova_compute[189608]: 2025-11-24 22:33:19.357 189613 DEBUG nova.compute.manager [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 24 22:33:19 compute-0 nova_compute[189608]: 2025-11-24 22:33:19.358 189613 DEBUG nova.network.neutron [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 24 22:33:19 compute-0 nova_compute[189608]: 2025-11-24 22:33:19.389 189613 INFO nova.virt.libvirt.driver [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 24 22:33:19 compute-0 nova_compute[189608]: 2025-11-24 22:33:19.425 189613 DEBUG nova.compute.manager [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 24 22:33:19 compute-0 nova_compute[189608]: 2025-11-24 22:33:19.523 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:33:19 compute-0 nova_compute[189608]: 2025-11-24 22:33:19.535 189613 DEBUG nova.compute.manager [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 24 22:33:19 compute-0 nova_compute[189608]: 2025-11-24 22:33:19.536 189613 DEBUG nova.virt.libvirt.driver [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 24 22:33:19 compute-0 nova_compute[189608]: 2025-11-24 22:33:19.537 189613 INFO nova.virt.libvirt.driver [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Creating image(s)
Nov 24 22:33:19 compute-0 nova_compute[189608]: 2025-11-24 22:33:19.537 189613 DEBUG oslo_concurrency.lockutils [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Acquiring lock "/var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:33:19 compute-0 nova_compute[189608]: 2025-11-24 22:33:19.538 189613 DEBUG oslo_concurrency.lockutils [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Lock "/var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:33:19 compute-0 nova_compute[189608]: 2025-11-24 22:33:19.538 189613 DEBUG oslo_concurrency.lockutils [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Lock "/var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:33:19 compute-0 nova_compute[189608]: 2025-11-24 22:33:19.550 189613 DEBUG oslo_concurrency.processutils [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3114b07aff678ef05dd12aafd3a42953942e41b --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:33:19 compute-0 nova_compute[189608]: 2025-11-24 22:33:19.613 189613 DEBUG oslo_concurrency.processutils [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3114b07aff678ef05dd12aafd3a42953942e41b --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:33:19 compute-0 nova_compute[189608]: 2025-11-24 22:33:19.615 189613 DEBUG oslo_concurrency.lockutils [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Acquiring lock "e3114b07aff678ef05dd12aafd3a42953942e41b" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:33:19 compute-0 nova_compute[189608]: 2025-11-24 22:33:19.615 189613 DEBUG oslo_concurrency.lockutils [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Lock "e3114b07aff678ef05dd12aafd3a42953942e41b" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:33:19 compute-0 nova_compute[189608]: 2025-11-24 22:33:19.637 189613 DEBUG oslo_concurrency.processutils [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3114b07aff678ef05dd12aafd3a42953942e41b --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:33:19 compute-0 nova_compute[189608]: 2025-11-24 22:33:19.695 189613 DEBUG oslo_concurrency.processutils [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3114b07aff678ef05dd12aafd3a42953942e41b --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:33:19 compute-0 nova_compute[189608]: 2025-11-24 22:33:19.696 189613 DEBUG oslo_concurrency.processutils [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e3114b07aff678ef05dd12aafd3a42953942e41b,backing_fmt=raw /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:33:19 compute-0 nova_compute[189608]: 2025-11-24 22:33:19.743 189613 DEBUG oslo_concurrency.processutils [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e3114b07aff678ef05dd12aafd3a42953942e41b,backing_fmt=raw /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk 1073741824" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:33:19 compute-0 nova_compute[189608]: 2025-11-24 22:33:19.745 189613 DEBUG oslo_concurrency.lockutils [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Lock "e3114b07aff678ef05dd12aafd3a42953942e41b" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.130s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:33:19 compute-0 nova_compute[189608]: 2025-11-24 22:33:19.746 189613 DEBUG oslo_concurrency.processutils [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3114b07aff678ef05dd12aafd3a42953942e41b --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:33:19 compute-0 nova_compute[189608]: 2025-11-24 22:33:19.812 189613 DEBUG oslo_concurrency.processutils [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3114b07aff678ef05dd12aafd3a42953942e41b --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:33:19 compute-0 nova_compute[189608]: 2025-11-24 22:33:19.814 189613 DEBUG nova.virt.disk.api [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Checking if we can resize image /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 24 22:33:19 compute-0 nova_compute[189608]: 2025-11-24 22:33:19.815 189613 DEBUG oslo_concurrency.processutils [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:33:19 compute-0 nova_compute[189608]: 2025-11-24 22:33:19.894 189613 DEBUG oslo_concurrency.processutils [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:33:19 compute-0 nova_compute[189608]: 2025-11-24 22:33:19.896 189613 DEBUG nova.virt.disk.api [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Cannot resize image /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Nov 24 22:33:19 compute-0 nova_compute[189608]: 2025-11-24 22:33:19.897 189613 DEBUG nova.objects.instance [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Lazy-loading 'migration_context' on Instance uuid 715e08a7-7174-4e14-a83d-67aab18333d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:33:19 compute-0 nova_compute[189608]: 2025-11-24 22:33:19.914 189613 DEBUG nova.virt.libvirt.driver [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 24 22:33:19 compute-0 nova_compute[189608]: 2025-11-24 22:33:19.915 189613 DEBUG nova.virt.libvirt.driver [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Ensure instance console log exists: /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 24 22:33:19 compute-0 nova_compute[189608]: 2025-11-24 22:33:19.915 189613 DEBUG oslo_concurrency.lockutils [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:33:19 compute-0 nova_compute[189608]: 2025-11-24 22:33:19.916 189613 DEBUG oslo_concurrency.lockutils [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:33:19 compute-0 nova_compute[189608]: 2025-11-24 22:33:19.916 189613 DEBUG oslo_concurrency.lockutils [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:33:20 compute-0 nova_compute[189608]: 2025-11-24 22:33:20.088 189613 DEBUG nova.policy [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fcf527fb124b42b9ab6a20cc0938b39f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4a6957a775da42c9b535753d6b0279d6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 24 22:33:21 compute-0 nova_compute[189608]: 2025-11-24 22:33:21.528 189613 DEBUG nova.network.neutron [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Successfully created port: 9d8978ca-0c88-4b94-bebb-cca47795447e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 24 22:33:22 compute-0 nova_compute[189608]: 2025-11-24 22:33:22.126 189613 DEBUG nova.network.neutron [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Successfully updated port: 9d8978ca-0c88-4b94-bebb-cca47795447e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 24 22:33:22 compute-0 nova_compute[189608]: 2025-11-24 22:33:22.142 189613 DEBUG oslo_concurrency.lockutils [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Acquiring lock "refresh_cache-715e08a7-7174-4e14-a83d-67aab18333d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:33:22 compute-0 nova_compute[189608]: 2025-11-24 22:33:22.143 189613 DEBUG oslo_concurrency.lockutils [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Acquired lock "refresh_cache-715e08a7-7174-4e14-a83d-67aab18333d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:33:22 compute-0 nova_compute[189608]: 2025-11-24 22:33:22.144 189613 DEBUG nova.network.neutron [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 22:33:22 compute-0 nova_compute[189608]: 2025-11-24 22:33:22.240 189613 DEBUG nova.compute.manager [req-a3f29d50-00ea-4110-94de-605e59fc05fd req-90fd0fb5-6815-4e19-b3e5-e36e69e7eafc c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Received event network-changed-9d8978ca-0c88-4b94-bebb-cca47795447e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:33:22 compute-0 nova_compute[189608]: 2025-11-24 22:33:22.243 189613 DEBUG nova.compute.manager [req-a3f29d50-00ea-4110-94de-605e59fc05fd req-90fd0fb5-6815-4e19-b3e5-e36e69e7eafc c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Refreshing instance network info cache due to event network-changed-9d8978ca-0c88-4b94-bebb-cca47795447e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 22:33:22 compute-0 nova_compute[189608]: 2025-11-24 22:33:22.244 189613 DEBUG oslo_concurrency.lockutils [req-a3f29d50-00ea-4110-94de-605e59fc05fd req-90fd0fb5-6815-4e19-b3e5-e36e69e7eafc c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "refresh_cache-715e08a7-7174-4e14-a83d-67aab18333d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:33:22 compute-0 nova_compute[189608]: 2025-11-24 22:33:22.273 189613 DEBUG nova.network.neutron [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.109 189613 DEBUG nova.network.neutron [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Updating instance_info_cache with network_info: [{"id": "9d8978ca-0c88-4b94-bebb-cca47795447e", "address": "fa:16:3e:be:93:b1", "network": {"id": "a164481b-21c8-4cae-a6e9-b470d8a55a1f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6957a775da42c9b535753d6b0279d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d8978ca-0c", "ovs_interfaceid": "9d8978ca-0c88-4b94-bebb-cca47795447e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.136 189613 DEBUG oslo_concurrency.lockutils [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Releasing lock "refresh_cache-715e08a7-7174-4e14-a83d-67aab18333d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.137 189613 DEBUG nova.compute.manager [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Instance network_info: |[{"id": "9d8978ca-0c88-4b94-bebb-cca47795447e", "address": "fa:16:3e:be:93:b1", "network": {"id": "a164481b-21c8-4cae-a6e9-b470d8a55a1f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6957a775da42c9b535753d6b0279d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d8978ca-0c", "ovs_interfaceid": "9d8978ca-0c88-4b94-bebb-cca47795447e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.139 189613 DEBUG oslo_concurrency.lockutils [req-a3f29d50-00ea-4110-94de-605e59fc05fd req-90fd0fb5-6815-4e19-b3e5-e36e69e7eafc c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquired lock "refresh_cache-715e08a7-7174-4e14-a83d-67aab18333d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.140 189613 DEBUG nova.network.neutron [req-a3f29d50-00ea-4110-94de-605e59fc05fd req-90fd0fb5-6815-4e19-b3e5-e36e69e7eafc c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Refreshing network info cache for port 9d8978ca-0c88-4b94-bebb-cca47795447e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.145 189613 DEBUG nova.virt.libvirt.driver [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Start _get_guest_xml network_info=[{"id": "9d8978ca-0c88-4b94-bebb-cca47795447e", "address": "fa:16:3e:be:93:b1", "network": {"id": "a164481b-21c8-4cae-a6e9-b470d8a55a1f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6957a775da42c9b535753d6b0279d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d8978ca-0c", "ovs_interfaceid": "9d8978ca-0c88-4b94-bebb-cca47795447e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T22:29:48Z,direct_url=<?>,disk_format='qcow2',id=ea88776c-3c0b-4e74-99b4-08aadc81390f,min_disk=0,min_ram=0,name='tempest-scenario-img--1781237514',owner='4a6957a775da42c9b535753d6b0279d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T22:29:50Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'size': 0, 'encryption_format': None, 'guest_format': None, 'image_id': 'ea88776c-3c0b-4e74-99b4-08aadc81390f'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.167 189613 WARNING nova.virt.libvirt.driver [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.175 189613 DEBUG nova.virt.libvirt.host [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.177 189613 DEBUG nova.virt.libvirt.host [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.184 189613 DEBUG nova.virt.libvirt.host [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.185 189613 DEBUG nova.virt.libvirt.host [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.186 189613 DEBUG nova.virt.libvirt.driver [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.187 189613 DEBUG nova.virt.hardware [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-24T22:28:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a49f1e6c-1051-4dea-812e-0063121444a0',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T22:29:48Z,direct_url=<?>,disk_format='qcow2',id=ea88776c-3c0b-4e74-99b4-08aadc81390f,min_disk=0,min_ram=0,name='tempest-scenario-img--1781237514',owner='4a6957a775da42c9b535753d6b0279d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T22:29:50Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.188 189613 DEBUG nova.virt.hardware [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.189 189613 DEBUG nova.virt.hardware [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.190 189613 DEBUG nova.virt.hardware [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.191 189613 DEBUG nova.virt.hardware [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.191 189613 DEBUG nova.virt.hardware [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.192 189613 DEBUG nova.virt.hardware [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.193 189613 DEBUG nova.virt.hardware [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.194 189613 DEBUG nova.virt.hardware [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.194 189613 DEBUG nova.virt.hardware [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.196 189613 DEBUG nova.virt.hardware [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.202 189613 DEBUG nova.virt.libvirt.vif [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T22:33:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-0491564-asg-haouulsup5dm-khtqnyy46ddc-wz3ip25x6yfo',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-0491564-asg-haouulsup5dm-khtqnyy46ddc-wz3ip25x6yfo',id=15,image_ref='ea88776c-3c0b-4e74-99b4-08aadc81390f',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='c6477657-e9b0-476c-83b3-9dc474e946c6'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4a6957a775da42c9b535753d6b0279d6',ramdisk_id='',reservation_id='r-051484o2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ea88776c-3c0b-4e74-99b4-08aadc81390f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-332462970',owner_user_name='tempest-PrometheusGabbiTest-332462970-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T22:33:19Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='fcf527fb124b42b9ab6a20cc0938b39f',uuid=715e08a7-7174-4e14-a83d-67aab18333d8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9d8978ca-0c88-4b94-bebb-cca47795447e", "address": "fa:16:3e:be:93:b1", "network": {"id": "a164481b-21c8-4cae-a6e9-b470d8a55a1f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6957a775da42c9b535753d6b0279d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d8978ca-0c", "ovs_interfaceid": "9d8978ca-0c88-4b94-bebb-cca47795447e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.204 189613 DEBUG nova.network.os_vif_util [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Converting VIF {"id": "9d8978ca-0c88-4b94-bebb-cca47795447e", "address": "fa:16:3e:be:93:b1", "network": {"id": "a164481b-21c8-4cae-a6e9-b470d8a55a1f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6957a775da42c9b535753d6b0279d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d8978ca-0c", "ovs_interfaceid": "9d8978ca-0c88-4b94-bebb-cca47795447e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.205 189613 DEBUG nova.network.os_vif_util [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:93:b1,bridge_name='br-int',has_traffic_filtering=True,id=9d8978ca-0c88-4b94-bebb-cca47795447e,network=Network(a164481b-21c8-4cae-a6e9-b470d8a55a1f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d8978ca-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.207 189613 DEBUG nova.objects.instance [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 715e08a7-7174-4e14-a83d-67aab18333d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.251 189613 DEBUG nova.virt.libvirt.driver [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] End _get_guest_xml xml=<domain type="kvm">
Nov 24 22:33:23 compute-0 nova_compute[189608]:   <uuid>715e08a7-7174-4e14-a83d-67aab18333d8</uuid>
Nov 24 22:33:23 compute-0 nova_compute[189608]:   <name>instance-0000000f</name>
Nov 24 22:33:23 compute-0 nova_compute[189608]:   <memory>131072</memory>
Nov 24 22:33:23 compute-0 nova_compute[189608]:   <vcpu>1</vcpu>
Nov 24 22:33:23 compute-0 nova_compute[189608]:   <metadata>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 22:33:23 compute-0 nova_compute[189608]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:       <nova:name>te-0491564-asg-haouulsup5dm-khtqnyy46ddc-wz3ip25x6yfo</nova:name>
Nov 24 22:33:23 compute-0 nova_compute[189608]:       <nova:creationTime>2025-11-24 22:33:23</nova:creationTime>
Nov 24 22:33:23 compute-0 nova_compute[189608]:       <nova:flavor name="m1.nano">
Nov 24 22:33:23 compute-0 nova_compute[189608]:         <nova:memory>128</nova:memory>
Nov 24 22:33:23 compute-0 nova_compute[189608]:         <nova:disk>1</nova:disk>
Nov 24 22:33:23 compute-0 nova_compute[189608]:         <nova:swap>0</nova:swap>
Nov 24 22:33:23 compute-0 nova_compute[189608]:         <nova:ephemeral>0</nova:ephemeral>
Nov 24 22:33:23 compute-0 nova_compute[189608]:         <nova:vcpus>1</nova:vcpus>
Nov 24 22:33:23 compute-0 nova_compute[189608]:       </nova:flavor>
Nov 24 22:33:23 compute-0 nova_compute[189608]:       <nova:owner>
Nov 24 22:33:23 compute-0 nova_compute[189608]:         <nova:user uuid="fcf527fb124b42b9ab6a20cc0938b39f">tempest-PrometheusGabbiTest-332462970-project-member</nova:user>
Nov 24 22:33:23 compute-0 nova_compute[189608]:         <nova:project uuid="4a6957a775da42c9b535753d6b0279d6">tempest-PrometheusGabbiTest-332462970</nova:project>
Nov 24 22:33:23 compute-0 nova_compute[189608]:       </nova:owner>
Nov 24 22:33:23 compute-0 nova_compute[189608]:       <nova:root type="image" uuid="ea88776c-3c0b-4e74-99b4-08aadc81390f"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:       <nova:ports>
Nov 24 22:33:23 compute-0 nova_compute[189608]:         <nova:port uuid="9d8978ca-0c88-4b94-bebb-cca47795447e">
Nov 24 22:33:23 compute-0 nova_compute[189608]:           <nova:ip type="fixed" address="10.100.0.203" ipVersion="4"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:         </nova:port>
Nov 24 22:33:23 compute-0 nova_compute[189608]:       </nova:ports>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     </nova:instance>
Nov 24 22:33:23 compute-0 nova_compute[189608]:   </metadata>
Nov 24 22:33:23 compute-0 nova_compute[189608]:   <sysinfo type="smbios">
Nov 24 22:33:23 compute-0 nova_compute[189608]:     <system>
Nov 24 22:33:23 compute-0 nova_compute[189608]:       <entry name="manufacturer">RDO</entry>
Nov 24 22:33:23 compute-0 nova_compute[189608]:       <entry name="product">OpenStack Compute</entry>
Nov 24 22:33:23 compute-0 nova_compute[189608]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 22:33:23 compute-0 nova_compute[189608]:       <entry name="serial">715e08a7-7174-4e14-a83d-67aab18333d8</entry>
Nov 24 22:33:23 compute-0 nova_compute[189608]:       <entry name="uuid">715e08a7-7174-4e14-a83d-67aab18333d8</entry>
Nov 24 22:33:23 compute-0 nova_compute[189608]:       <entry name="family">Virtual Machine</entry>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     </system>
Nov 24 22:33:23 compute-0 nova_compute[189608]:   </sysinfo>
Nov 24 22:33:23 compute-0 nova_compute[189608]:   <os>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     <boot dev="hd"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     <smbios mode="sysinfo"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:   </os>
Nov 24 22:33:23 compute-0 nova_compute[189608]:   <features>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     <acpi/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     <apic/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     <vmcoreinfo/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:   </features>
Nov 24 22:33:23 compute-0 nova_compute[189608]:   <clock offset="utc">
Nov 24 22:33:23 compute-0 nova_compute[189608]:     <timer name="pit" tickpolicy="delay"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     <timer name="hpet" present="no"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:   </clock>
Nov 24 22:33:23 compute-0 nova_compute[189608]:   <cpu mode="host-model" match="exact">
Nov 24 22:33:23 compute-0 nova_compute[189608]:     <topology sockets="1" cores="1" threads="1"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:   </cpu>
Nov 24 22:33:23 compute-0 nova_compute[189608]:   <devices>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     <disk type="file" device="disk">
Nov 24 22:33:23 compute-0 nova_compute[189608]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:       <source file="/var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:       <target dev="vda" bus="virtio"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     </disk>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     <disk type="file" device="cdrom">
Nov 24 22:33:23 compute-0 nova_compute[189608]:       <driver name="qemu" type="raw" cache="none"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:       <source file="/var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk.config"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:       <target dev="sda" bus="sata"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     </disk>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     <interface type="ethernet">
Nov 24 22:33:23 compute-0 nova_compute[189608]:       <mac address="fa:16:3e:be:93:b1"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:       <model type="virtio"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:       <driver name="vhost" rx_queue_size="512"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:       <mtu size="1442"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:       <target dev="tap9d8978ca-0c"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     </interface>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     <serial type="pty">
Nov 24 22:33:23 compute-0 nova_compute[189608]:       <log file="/var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/console.log" append="off"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     </serial>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     <video>
Nov 24 22:33:23 compute-0 nova_compute[189608]:       <model type="virtio"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     </video>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     <input type="tablet" bus="usb"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     <rng model="virtio">
Nov 24 22:33:23 compute-0 nova_compute[189608]:       <backend model="random">/dev/urandom</backend>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     </rng>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     <controller type="usb" index="0"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     <memballoon model="virtio">
Nov 24 22:33:23 compute-0 nova_compute[189608]:       <stats period="10"/>
Nov 24 22:33:23 compute-0 nova_compute[189608]:     </memballoon>
Nov 24 22:33:23 compute-0 nova_compute[189608]:   </devices>
Nov 24 22:33:23 compute-0 nova_compute[189608]: </domain>
Nov 24 22:33:23 compute-0 nova_compute[189608]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
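The Instance dump logged at vif.py:563 above carries the tempest user_data as a base64 blob. A minimal sketch decoding it with only the standard library (the blob is copied verbatim from the log; everything else here is illustrative, not part of the log):

    # Decode the user_data string from the Instance dump above (vif.py:563).
    import base64

    user_data = "IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg=="
    print(base64.b64decode(user_data).decode())
    # Decoded output -- the tempest CPU-load script handed to the guest:
    #   #!/bin/sh
    #   echo 'Loading CPU'
    #   set -v
    #   cat /dev/urandom > /dev/null & sleep 300 ; kill $!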
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.253 189613 DEBUG nova.compute.manager [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Preparing to wait for external event network-vif-plugged-9d8978ca-0c88-4b94-bebb-cca47795447e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.253 189613 DEBUG oslo_concurrency.lockutils [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Acquiring lock "715e08a7-7174-4e14-a83d-67aab18333d8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.253 189613 DEBUG oslo_concurrency.lockutils [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Lock "715e08a7-7174-4e14-a83d-67aab18333d8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.254 189613 DEBUG oslo_concurrency.lockutils [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Lock "715e08a7-7174-4e14-a83d-67aab18333d8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.255 189613 DEBUG nova.virt.libvirt.vif [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T22:33:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-0491564-asg-haouulsup5dm-khtqnyy46ddc-wz3ip25x6yfo',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-0491564-asg-haouulsup5dm-khtqnyy46ddc-wz3ip25x6yfo',id=15,image_ref='ea88776c-3c0b-4e74-99b4-08aadc81390f',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='c6477657-e9b0-476c-83b3-9dc474e946c6'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4a6957a775da42c9b535753d6b0279d6',ramdisk_id='',reservation_id='r-051484o2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ea88776c-3c0b-4e74-99b4-08aadc81390f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-332462970',owner_user_name='tempest-PrometheusGabbiTest-332462970-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T22:33:19Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='fcf527fb124b42b9ab6a20cc0938b39f',uuid=715e08a7-7174-4e14-a83d-67aab18333d8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9d8978ca-0c88-4b94-bebb-cca47795447e", "address": "fa:16:3e:be:93:b1", "network": {"id": "a164481b-21c8-4cae-a6e9-b470d8a55a1f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6957a775da42c9b535753d6b0279d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d8978ca-0c", "ovs_interfaceid": "9d8978ca-0c88-4b94-bebb-cca47795447e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.255 189613 DEBUG nova.network.os_vif_util [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Converting VIF {"id": "9d8978ca-0c88-4b94-bebb-cca47795447e", "address": "fa:16:3e:be:93:b1", "network": {"id": "a164481b-21c8-4cae-a6e9-b470d8a55a1f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6957a775da42c9b535753d6b0279d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d8978ca-0c", "ovs_interfaceid": "9d8978ca-0c88-4b94-bebb-cca47795447e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.256 189613 DEBUG nova.network.os_vif_util [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:93:b1,bridge_name='br-int',has_traffic_filtering=True,id=9d8978ca-0c88-4b94-bebb-cca47795447e,network=Network(a164481b-21c8-4cae-a6e9-b470d8a55a1f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d8978ca-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.257 189613 DEBUG os_vif [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:93:b1,bridge_name='br-int',has_traffic_filtering=True,id=9d8978ca-0c88-4b94-bebb-cca47795447e,network=Network(a164481b-21c8-4cae-a6e9-b470d8a55a1f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d8978ca-0c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.257 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.258 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.259 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.264 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.264 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9d8978ca-0c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.265 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9d8978ca-0c, col_values=(('external_ids', {'iface-id': '9d8978ca-0c88-4b94-bebb-cca47795447e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:be:93:b1', 'vm-uuid': '715e08a7-7174-4e14-a83d-67aab18333d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:33:23 compute-0 NetworkManager[56413]: <info>  [1764023603.2685] manager: (tap9d8978ca-0c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.270 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.281 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.282 189613 INFO os_vif [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:93:b1,bridge_name='br-int',has_traffic_filtering=True,id=9d8978ca-0c88-4b94-bebb-cca47795447e,network=Network(a164481b-21c8-4cae-a6e9-b470d8a55a1f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d8978ca-0c')
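The tap device plugged above, tap9d8978ca-0c, looks like the Neutron port UUID truncated to fit the kernel's 15-character interface-name limit. A sketch of that derivation (assumption: 'tap' plus the port id, cut to 14 characters, which is how nova/os-vif conventionally build the devname; only the UUID is taken from the log):

    # Assumed derivation of the devname seen in the VIF dicts and OVS commands above.
    port_id = "9d8978ca-0c88-4b94-bebb-cca47795447e"   # port UUID from the log
    devname = ("tap" + port_id)[:14]                    # stay under IFNAMSIZ-1
    print(devname)                                      # -> tap9d8978ca-0c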
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.343 189613 DEBUG nova.virt.libvirt.driver [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.343 189613 DEBUG nova.virt.libvirt.driver [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.344 189613 DEBUG nova.virt.libvirt.driver [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] No VIF found with MAC fa:16:3e:be:93:b1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.344 189613 INFO nova.virt.libvirt.driver [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Using config drive
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.749 189613 INFO nova.virt.libvirt.driver [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Creating config drive at /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk.config
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.767 189613 DEBUG oslo_concurrency.processutils [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp09i0et22 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:33:23 compute-0 nova_compute[189608]: 2025-11-24 22:33:23.903 189613 DEBUG oslo_concurrency.processutils [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp09i0et22" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
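The mkisofs run above writes the config drive with volume label config-2 (the -V argument). A minimal sketch that checks the label by reading the ISO9660 primary volume descriptor directly, assuming disk.config is a plain ISO image as mkisofs produces (the path is copied from the log):

    # Read the ISO9660 primary volume descriptor of the config drive and print
    # its volume identifier; expected value: "config-2".
    path = "/var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk.config"
    with open(path, "rb") as f:
        f.seek(16 * 2048)                 # PVD lives at sector 16 of a 2048-byte-sector ISO
        pvd = f.read(2048)
    print(pvd[40:72].decode("ascii").strip())   # volume identifier field (bytes 40-71)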
Nov 24 22:33:23 compute-0 kernel: tap9d8978ca-0c: entered promiscuous mode
Nov 24 22:33:23 compute-0 NetworkManager[56413]: <info>  [1764023603.9950] manager: (tap9d8978ca-0c): new Tun device (/org/freedesktop/NetworkManager/Devices/75)
Nov 24 22:33:23 compute-0 ovn_controller[97889]: 2025-11-24T22:33:23Z|00178|binding|INFO|Claiming lport 9d8978ca-0c88-4b94-bebb-cca47795447e for this chassis.
Nov 24 22:33:23 compute-0 ovn_controller[97889]: 2025-11-24T22:33:23Z|00179|binding|INFO|9d8978ca-0c88-4b94-bebb-cca47795447e: Claiming fa:16:3e:be:93:b1 10.100.0.203
Nov 24 22:33:24 compute-0 nova_compute[189608]: 2025-11-24 22:33:24.000 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:33:24 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:33:24.011 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:93:b1 10.100.0.203'], port_security=['fa:16:3e:be:93:b1 10.100.0.203'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.203/16', 'neutron:device_id': '715e08a7-7174-4e14-a83d-67aab18333d8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a164481b-21c8-4cae-a6e9-b470d8a55a1f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4a6957a775da42c9b535753d6b0279d6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '24045f91-3265-40cf-b7b6-d2589223975b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3186cdf0-d894-4e3e-a84d-b369c1fcfb08, chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], logical_port=9d8978ca-0c88-4b94-bebb-cca47795447e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:33:24 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:33:24.012 106776 INFO neutron.agent.ovn.metadata.agent [-] Port 9d8978ca-0c88-4b94-bebb-cca47795447e in datapath a164481b-21c8-4cae-a6e9-b470d8a55a1f bound to our chassis
Nov 24 22:33:24 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:33:24.015 106776 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a164481b-21c8-4cae-a6e9-b470d8a55a1f
Nov 24 22:33:24 compute-0 ovn_controller[97889]: 2025-11-24T22:33:24Z|00180|binding|INFO|Setting lport 9d8978ca-0c88-4b94-bebb-cca47795447e ovn-installed in OVS
Nov 24 22:33:24 compute-0 ovn_controller[97889]: 2025-11-24T22:33:24Z|00181|binding|INFO|Setting lport 9d8978ca-0c88-4b94-bebb-cca47795447e up in Southbound
Nov 24 22:33:24 compute-0 nova_compute[189608]: 2025-11-24 22:33:24.019 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:33:24 compute-0 nova_compute[189608]: 2025-11-24 22:33:24.023 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:33:24 compute-0 systemd-machined[155884]: New machine qemu-16-instance-0000000f.
Nov 24 22:33:24 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:33:24.060 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[1e938a8f-feb7-47f5-a069-8eee58176e69]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:33:24 compute-0 systemd[1]: Started Virtual Machine qemu-16-instance-0000000f.
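Once systemd-machined registers qemu-16-instance-0000000f, the same domain can be inspected through the libvirt API. A sketch using the libvirt-python bindings (the qemu:///system URI is an assumption about how this host exposes the hypervisor; the domain name comes from the <name> element in the XML above):

    # Look up the freshly started domain and dump the live XML that nova logged above.
    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-0000000f")
    print(dom.isActive())        # 1 once the guest is running
    print(dom.XMLDesc(0)[:200])  # start of the live domain XML
    conn.close()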
Nov 24 22:33:24 compute-0 systemd-udevd[254973]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 22:33:24 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:33:24.108 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[711adb9b-3717-4e80-8a16-7c50db1a60ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:33:24 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:33:24.113 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[b2f5605c-952e-4b00-8a67-0aed6303747c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:33:24 compute-0 NetworkManager[56413]: <info>  [1764023604.1266] device (tap9d8978ca-0c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 22:33:24 compute-0 NetworkManager[56413]: <info>  [1764023604.1281] device (tap9d8978ca-0c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 24 22:33:24 compute-0 nova_compute[189608]: 2025-11-24 22:33:24.161 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:33:24 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:33:24.172 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[e38360ad-fe9b-419c-bd2e-047f0caec002]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:33:24 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:33:24.196 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[266dd762-a0a0-41db-9a5e-f2fd0d57d112]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa164481b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:a0:98'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 526439, 'reachable_time': 24498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254984, 'error': None, 'target': 'ovnmeta-a164481b-21c8-4cae-a6e9-b470d8a55a1f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:33:24 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:33:24.215 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[60dff717-9674-4840-8ca7-42592b01f482]: (4, ({'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tapa164481b-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 526458, 'tstamp': 526458}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254985, 'error': None, 'target': 'ovnmeta-a164481b-21c8-4cae-a6e9-b470d8a55a1f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa164481b-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 526464, 'tstamp': 526464}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254985, 'error': None, 'target': 'ovnmeta-a164481b-21c8-4cae-a6e9-b470d8a55a1f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:33:24 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:33:24.218 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa164481b-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:33:24 compute-0 nova_compute[189608]: 2025-11-24 22:33:24.221 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:33:24 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:33:24.223 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa164481b-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:33:24 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:33:24.224 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 22:33:24 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:33:24.224 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa164481b-20, col_values=(('external_ids', {'iface-id': 'ce3870c0-48db-470b-8d5d-479134c9b554'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:33:24 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:33:24.225 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
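The address dump above shows the ovnmeta namespace interface tapa164481b-21 holding 169.254.169.254/32, the link-local address the metadata service answers on. A sketch of how the guest would query it once booted (the endpoint path is the standard OpenStack metadata path, not something taken from this log; it has to run from inside the instance):

    # From inside the guest: fetch instance metadata via the address provisioned above.
    import json
    import urllib.request

    url = "http://169.254.169.254/openstack/latest/meta_data.json"
    with urllib.request.urlopen(url, timeout=5) as resp:
        md = json.load(resp)
    print(md.get("uuid"))   # should report 715e08a7-7174-4e14-a83d-67aab18333d8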
Nov 24 22:33:24 compute-0 nova_compute[189608]: 2025-11-24 22:33:24.377 189613 DEBUG nova.compute.manager [req-2ddd7da7-ccc2-4e92-a690-9ce9ad922b11 req-47ee62da-d73e-49c7-a500-1545403138a1 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Received event network-vif-plugged-9d8978ca-0c88-4b94-bebb-cca47795447e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:33:24 compute-0 nova_compute[189608]: 2025-11-24 22:33:24.379 189613 DEBUG oslo_concurrency.lockutils [req-2ddd7da7-ccc2-4e92-a690-9ce9ad922b11 req-47ee62da-d73e-49c7-a500-1545403138a1 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "715e08a7-7174-4e14-a83d-67aab18333d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:33:24 compute-0 nova_compute[189608]: 2025-11-24 22:33:24.379 189613 DEBUG oslo_concurrency.lockutils [req-2ddd7da7-ccc2-4e92-a690-9ce9ad922b11 req-47ee62da-d73e-49c7-a500-1545403138a1 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "715e08a7-7174-4e14-a83d-67aab18333d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:33:24 compute-0 nova_compute[189608]: 2025-11-24 22:33:24.380 189613 DEBUG oslo_concurrency.lockutils [req-2ddd7da7-ccc2-4e92-a690-9ce9ad922b11 req-47ee62da-d73e-49c7-a500-1545403138a1 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "715e08a7-7174-4e14-a83d-67aab18333d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:33:24 compute-0 nova_compute[189608]: 2025-11-24 22:33:24.381 189613 DEBUG nova.compute.manager [req-2ddd7da7-ccc2-4e92-a690-9ce9ad922b11 req-47ee62da-d73e-49c7-a500-1545403138a1 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Processing event network-vif-plugged-9d8978ca-0c88-4b94-bebb-cca47795447e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 24 22:33:24 compute-0 nova_compute[189608]: 2025-11-24 22:33:24.438 189613 DEBUG nova.virt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Emitting event <LifecycleEvent: 1764023604.4380152, 715e08a7-7174-4e14-a83d-67aab18333d8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:33:24 compute-0 nova_compute[189608]: 2025-11-24 22:33:24.438 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] VM Started (Lifecycle Event)
Nov 24 22:33:24 compute-0 nova_compute[189608]: 2025-11-24 22:33:24.440 189613 DEBUG nova.compute.manager [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 24 22:33:24 compute-0 nova_compute[189608]: 2025-11-24 22:33:24.445 189613 DEBUG nova.virt.libvirt.driver [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 24 22:33:24 compute-0 nova_compute[189608]: 2025-11-24 22:33:24.450 189613 INFO nova.virt.libvirt.driver [-] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Instance spawned successfully.
Nov 24 22:33:24 compute-0 nova_compute[189608]: 2025-11-24 22:33:24.450 189613 DEBUG nova.virt.libvirt.driver [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 24 22:33:24 compute-0 nova_compute[189608]: 2025-11-24 22:33:24.453 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:33:24 compute-0 nova_compute[189608]: 2025-11-24 22:33:24.459 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 22:33:24 compute-0 nova_compute[189608]: 2025-11-24 22:33:24.471 189613 DEBUG nova.virt.libvirt.driver [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:33:24 compute-0 nova_compute[189608]: 2025-11-24 22:33:24.472 189613 DEBUG nova.virt.libvirt.driver [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:33:24 compute-0 nova_compute[189608]: 2025-11-24 22:33:24.473 189613 DEBUG nova.virt.libvirt.driver [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:33:24 compute-0 nova_compute[189608]: 2025-11-24 22:33:24.473 189613 DEBUG nova.virt.libvirt.driver [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:33:24 compute-0 nova_compute[189608]: 2025-11-24 22:33:24.474 189613 DEBUG nova.virt.libvirt.driver [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:33:24 compute-0 nova_compute[189608]: 2025-11-24 22:33:24.475 189613 DEBUG nova.virt.libvirt.driver [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 22:33:24 compute-0 nova_compute[189608]: 2025-11-24 22:33:24.479 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 22:33:24 compute-0 nova_compute[189608]: 2025-11-24 22:33:24.480 189613 DEBUG nova.virt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Emitting event <LifecycleEvent: 1764023604.4381144, 715e08a7-7174-4e14-a83d-67aab18333d8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:33:24 compute-0 nova_compute[189608]: 2025-11-24 22:33:24.480 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] VM Paused (Lifecycle Event)
Nov 24 22:33:24 compute-0 nova_compute[189608]: 2025-11-24 22:33:24.507 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:33:24 compute-0 nova_compute[189608]: 2025-11-24 22:33:24.513 189613 DEBUG nova.virt.driver [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] Emitting event <LifecycleEvent: 1764023604.4438572, 715e08a7-7174-4e14-a83d-67aab18333d8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:33:24 compute-0 nova_compute[189608]: 2025-11-24 22:33:24.514 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] VM Resumed (Lifecycle Event)
Nov 24 22:33:24 compute-0 nova_compute[189608]: 2025-11-24 22:33:24.534 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:33:24 compute-0 nova_compute[189608]: 2025-11-24 22:33:24.536 189613 INFO nova.compute.manager [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Took 5.00 seconds to spawn the instance on the hypervisor.
Nov 24 22:33:24 compute-0 nova_compute[189608]: 2025-11-24 22:33:24.537 189613 DEBUG nova.compute.manager [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:33:24 compute-0 nova_compute[189608]: 2025-11-24 22:33:24.543 189613 DEBUG nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 22:33:24 compute-0 nova_compute[189608]: 2025-11-24 22:33:24.566 189613 INFO nova.compute.manager [None req-3be1aebd-0888-4109-a4cf-74a7ce3f5e70 - - - - - -] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 22:33:24 compute-0 nova_compute[189608]: 2025-11-24 22:33:24.596 189613 INFO nova.compute.manager [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Took 5.56 seconds to build instance.
Nov 24 22:33:24 compute-0 nova_compute[189608]: 2025-11-24 22:33:24.610 189613 DEBUG oslo_concurrency.lockutils [None req-73eb89ba-a8d7-4a9b-a2d0-9f882d113dc9 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Lock "715e08a7-7174-4e14-a83d-67aab18333d8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
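The spawn timings reported above (5.00 s on the hypervisor, 5.56 s to build, 5.642 s holding the build lock) can be cross-checked against the raw timestamps in this excerpt. A small sketch computing one such interval, from the end of guest-XML generation to "Instance spawned successfully" (both timestamps copied from the nova_compute lines above):

    # Delta between two nova_compute timestamps from this section of the log.
    from datetime import datetime

    fmt = "%Y-%m-%d %H:%M:%S.%f"
    xml_done = datetime.strptime("2025-11-24 22:33:23.251", fmt)   # End _get_guest_xml
    spawned  = datetime.strptime("2025-11-24 22:33:24.450", fmt)   # Instance spawned successfully
    print((spawned - xml_done).total_seconds())   # ~1.199 s of VIF plug, config drive and boot work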
Nov 24 22:33:25 compute-0 nova_compute[189608]: 2025-11-24 22:33:25.090 189613 DEBUG nova.network.neutron [req-a3f29d50-00ea-4110-94de-605e59fc05fd req-90fd0fb5-6815-4e19-b3e5-e36e69e7eafc c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Updated VIF entry in instance network info cache for port 9d8978ca-0c88-4b94-bebb-cca47795447e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 22:33:25 compute-0 nova_compute[189608]: 2025-11-24 22:33:25.091 189613 DEBUG nova.network.neutron [req-a3f29d50-00ea-4110-94de-605e59fc05fd req-90fd0fb5-6815-4e19-b3e5-e36e69e7eafc c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Updating instance_info_cache with network_info: [{"id": "9d8978ca-0c88-4b94-bebb-cca47795447e", "address": "fa:16:3e:be:93:b1", "network": {"id": "a164481b-21c8-4cae-a6e9-b470d8a55a1f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6957a775da42c9b535753d6b0279d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d8978ca-0c", "ovs_interfaceid": "9d8978ca-0c88-4b94-bebb-cca47795447e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:33:25 compute-0 nova_compute[189608]: 2025-11-24 22:33:25.110 189613 DEBUG oslo_concurrency.lockutils [req-a3f29d50-00ea-4110-94de-605e59fc05fd req-90fd0fb5-6815-4e19-b3e5-e36e69e7eafc c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Releasing lock "refresh_cache-715e08a7-7174-4e14-a83d-67aab18333d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:33:25 compute-0 sshd-session[254949]: Invalid user anonymous from 185.217.1.246 port 53153
Nov 24 22:33:25 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:33:25.726 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d2f80616-70e9-484c-836d-1edab81fe5d9, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:33:26 compute-0 nova_compute[189608]: 2025-11-24 22:33:26.469 189613 DEBUG nova.compute.manager [req-5059873a-070c-49a6-955c-fdcf30729d44 req-a037c30a-e020-4483-9e78-53b41e758425 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Received event network-vif-plugged-9d8978ca-0c88-4b94-bebb-cca47795447e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:33:26 compute-0 nova_compute[189608]: 2025-11-24 22:33:26.470 189613 DEBUG oslo_concurrency.lockutils [req-5059873a-070c-49a6-955c-fdcf30729d44 req-a037c30a-e020-4483-9e78-53b41e758425 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "715e08a7-7174-4e14-a83d-67aab18333d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:33:26 compute-0 nova_compute[189608]: 2025-11-24 22:33:26.470 189613 DEBUG oslo_concurrency.lockutils [req-5059873a-070c-49a6-955c-fdcf30729d44 req-a037c30a-e020-4483-9e78-53b41e758425 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "715e08a7-7174-4e14-a83d-67aab18333d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:33:26 compute-0 nova_compute[189608]: 2025-11-24 22:33:26.470 189613 DEBUG oslo_concurrency.lockutils [req-5059873a-070c-49a6-955c-fdcf30729d44 req-a037c30a-e020-4483-9e78-53b41e758425 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "715e08a7-7174-4e14-a83d-67aab18333d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:33:26 compute-0 nova_compute[189608]: 2025-11-24 22:33:26.470 189613 DEBUG nova.compute.manager [req-5059873a-070c-49a6-955c-fdcf30729d44 req-a037c30a-e020-4483-9e78-53b41e758425 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] No waiting events found dispatching network-vif-plugged-9d8978ca-0c88-4b94-bebb-cca47795447e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 22:33:26 compute-0 nova_compute[189608]: 2025-11-24 22:33:26.471 189613 WARNING nova.compute.manager [req-5059873a-070c-49a6-955c-fdcf30729d44 req-a037c30a-e020-4483-9e78-53b41e758425 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Received unexpected event network-vif-plugged-9d8978ca-0c88-4b94-bebb-cca47795447e for instance with vm_state active and task_state None.
Nov 24 22:33:26 compute-0 podman[254994]: 2025-11-24 22:33:26.532541211 +0000 UTC m=+0.078982425 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., vcs-type=git, release=1755695350, io.openshift.tags=minimal rhel9, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, distribution-scope=public, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Nov 24 22:33:26 compute-0 podman[254993]: 2025-11-24 22:33:26.565497425 +0000 UTC m=+0.111724632 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, architecture=x86_64, com.redhat.component=ubi9-container, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, version=9.4, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, managed_by=edpm_ansible, release=1214.1726694543, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Nov 24 22:33:26 compute-0 podman[254995]: 2025-11-24 22:33:26.566465936 +0000 UTC m=+0.111180216 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 22:33:27 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:33:27.295 106886 DEBUG eventlet.wsgi.server [-] (106886) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Nov 24 22:33:27 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:33:27.298 106886 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0
Nov 24 22:33:27 compute-0 ovn_metadata_agent[106771]: Accept: */*
Nov 24 22:33:27 compute-0 ovn_metadata_agent[106771]: Connection: close
Nov 24 22:33:27 compute-0 ovn_metadata_agent[106771]: Content-Type: text/plain
Nov 24 22:33:27 compute-0 ovn_metadata_agent[106771]: Host: 169.254.169.254
Nov 24 22:33:27 compute-0 ovn_metadata_agent[106771]: User-Agent: curl/7.84.0
Nov 24 22:33:27 compute-0 ovn_metadata_agent[106771]: X-Forwarded-For: 10.100.0.7
Nov 24 22:33:27 compute-0 ovn_metadata_agent[106771]: X-Ovn-Network-Id: 4153540b-1891-459a-9fd1-3ba9595f1a33 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Nov 24 22:33:28 compute-0 nova_compute[189608]: 2025-11-24 22:33:28.269 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:33:28 compute-0 nova_compute[189608]: 2025-11-24 22:33:28.790 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:33:29 compute-0 nova_compute[189608]: 2025-11-24 22:33:29.166 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:33:29 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:33:29.361 106886 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Nov 24 22:33:29 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:33:29.363 106886 INFO eventlet.wsgi.server [-] 10.100.0.7,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 2.0654638
Nov 24 22:33:29 compute-0 haproxy-metadata-proxy-4153540b-1891-459a-9fd1-3ba9595f1a33[254439]: 10.100.0.7:35114 [24/Nov/2025:22:33:27.294] listener listener/metadata 0/0/0/2069/2069 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Nov 24 22:33:29 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:33:29.490 106886 DEBUG eventlet.wsgi.server [-] (106886) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Nov 24 22:33:29 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:33:29.493 106886 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0
Nov 24 22:33:29 compute-0 ovn_metadata_agent[106771]: Accept: */*
Nov 24 22:33:29 compute-0 ovn_metadata_agent[106771]: Connection: close
Nov 24 22:33:29 compute-0 ovn_metadata_agent[106771]: Content-Length: 100
Nov 24 22:33:29 compute-0 ovn_metadata_agent[106771]: Content-Type: application/x-www-form-urlencoded
Nov 24 22:33:29 compute-0 ovn_metadata_agent[106771]: Host: 169.254.169.254
Nov 24 22:33:29 compute-0 ovn_metadata_agent[106771]: User-Agent: curl/7.84.0
Nov 24 22:33:29 compute-0 ovn_metadata_agent[106771]: X-Forwarded-For: 10.100.0.7
Nov 24 22:33:29 compute-0 ovn_metadata_agent[106771]: X-Ovn-Network-Id: 4153540b-1891-459a-9fd1-3ba9595f1a33
Nov 24 22:33:29 compute-0 ovn_metadata_agent[106771]: 
Nov 24 22:33:29 compute-0 ovn_metadata_agent[106771]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Nov 24 22:33:29 compute-0 podman[203795]: time="2025-11-24T22:33:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:33:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:33:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30758 "" "Go-http-client/1.1"
Nov 24 22:33:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:33:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5273 "" "Go-http-client/1.1"
Nov 24 22:33:29 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:33:29.796 106886 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Nov 24 22:33:29 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:33:29.796 106886 INFO eventlet.wsgi.server [-] 10.100.0.7,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.3032391
Nov 24 22:33:29 compute-0 haproxy-metadata-proxy-4153540b-1891-459a-9fd1-3ba9595f1a33[254439]: 10.100.0.7:38614 [24/Nov/2025:22:33:29.489] listener listener/metadata 0/0/0/308/308 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Nov 24 22:33:30 compute-0 sshd-session[254949]: Disconnecting invalid user anonymous 185.217.1.246 port 53153: Change of username or service not allowed: (anonymous,ssh-connection) -> (tunnel,ssh-connection) [preauth]
Nov 24 22:33:31 compute-0 openstack_network_exporter[205945]: ERROR   22:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:33:31 compute-0 openstack_network_exporter[205945]: ERROR   22:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:33:31 compute-0 openstack_network_exporter[205945]: ERROR   22:33:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:33:31 compute-0 openstack_network_exporter[205945]: ERROR   22:33:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:33:31 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:33:31 compute-0 openstack_network_exporter[205945]: ERROR   22:33:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:33:31 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:33:31 compute-0 podman[255047]: 2025-11-24 22:33:31.539041284 +0000 UTC m=+0.092841916 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 24 22:33:32 compute-0 nova_compute[189608]: 2025-11-24 22:33:32.059 189613 DEBUG oslo_concurrency.lockutils [None req-c5aa0b81-e962-4e4e-a909-99ad624c4257 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Acquiring lock "b256c629-ce31-4c1a-a7a6-ed66c07e691a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:33:32 compute-0 nova_compute[189608]: 2025-11-24 22:33:32.060 189613 DEBUG oslo_concurrency.lockutils [None req-c5aa0b81-e962-4e4e-a909-99ad624c4257 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Lock "b256c629-ce31-4c1a-a7a6-ed66c07e691a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:33:32 compute-0 nova_compute[189608]: 2025-11-24 22:33:32.060 189613 DEBUG oslo_concurrency.lockutils [None req-c5aa0b81-e962-4e4e-a909-99ad624c4257 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Acquiring lock "b256c629-ce31-4c1a-a7a6-ed66c07e691a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:33:32 compute-0 nova_compute[189608]: 2025-11-24 22:33:32.061 189613 DEBUG oslo_concurrency.lockutils [None req-c5aa0b81-e962-4e4e-a909-99ad624c4257 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Lock "b256c629-ce31-4c1a-a7a6-ed66c07e691a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:33:32 compute-0 nova_compute[189608]: 2025-11-24 22:33:32.061 189613 DEBUG oslo_concurrency.lockutils [None req-c5aa0b81-e962-4e4e-a909-99ad624c4257 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Lock "b256c629-ce31-4c1a-a7a6-ed66c07e691a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:33:32 compute-0 nova_compute[189608]: 2025-11-24 22:33:32.063 189613 INFO nova.compute.manager [None req-c5aa0b81-e962-4e4e-a909-99ad624c4257 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Terminating instance
Nov 24 22:33:32 compute-0 nova_compute[189608]: 2025-11-24 22:33:32.064 189613 DEBUG nova.compute.manager [None req-c5aa0b81-e962-4e4e-a909-99ad624c4257 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 24 22:33:32 compute-0 kernel: tap91fe5820-9c (unregistering): left promiscuous mode
Nov 24 22:33:32 compute-0 NetworkManager[56413]: <info>  [1764023612.1162] device (tap91fe5820-9c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 24 22:33:32 compute-0 ovn_controller[97889]: 2025-11-24T22:33:32Z|00182|binding|INFO|Releasing lport 91fe5820-9c04-4bb0-94bb-b6c3068e81e1 from this chassis (sb_readonly=0)
Nov 24 22:33:32 compute-0 ovn_controller[97889]: 2025-11-24T22:33:32Z|00183|binding|INFO|Setting lport 91fe5820-9c04-4bb0-94bb-b6c3068e81e1 down in Southbound
Nov 24 22:33:32 compute-0 ovn_controller[97889]: 2025-11-24T22:33:32Z|00184|binding|INFO|Removing iface tap91fe5820-9c ovn-installed in OVS
Nov 24 22:33:32 compute-0 nova_compute[189608]: 2025-11-24 22:33:32.153 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:33:32 compute-0 nova_compute[189608]: 2025-11-24 22:33:32.176 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:33:32 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:33:32.179 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:fb:5f 10.100.0.7'], port_security=['fa:16:3e:dd:fb:5f 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'b256c629-ce31-4c1a-a7a6-ed66c07e691a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4153540b-1891-459a-9fd1-3ba9595f1a33', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b288cf23cdd049f48dfaafd888b33ea5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '521c979b-bc8e-4599-9106-34a19dacb3c4 7e4ca47e-ffcf-44c4-8bda-8d7d73d6409e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.175'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1dd1b5c5-bbef-41d9-9591-78d276d66648, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], logical_port=91fe5820-9c04-4bb0-94bb-b6c3068e81e1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:33:32 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:33:32.181 106776 INFO neutron.agent.ovn.metadata.agent [-] Port 91fe5820-9c04-4bb0-94bb-b6c3068e81e1 in datapath 4153540b-1891-459a-9fd1-3ba9595f1a33 unbound from our chassis
Nov 24 22:33:32 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:33:32.183 106776 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4153540b-1891-459a-9fd1-3ba9595f1a33, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 22:33:32 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:33:32.186 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[11e07b9d-693c-40d7-bfb1-c9c36e1a5bf6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:33:32 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:33:32.187 106776 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4153540b-1891-459a-9fd1-3ba9595f1a33 namespace which is not needed anymore
Nov 24 22:33:32 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Nov 24 22:33:32 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Consumed 41.318s CPU time.
Nov 24 22:33:32 compute-0 systemd-machined[155884]: Machine qemu-15-instance-0000000e terminated.
Nov 24 22:33:32 compute-0 nova_compute[189608]: 2025-11-24 22:33:32.348 189613 INFO nova.virt.libvirt.driver [-] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Instance destroyed successfully.
Nov 24 22:33:32 compute-0 nova_compute[189608]: 2025-11-24 22:33:32.352 189613 DEBUG nova.objects.instance [None req-c5aa0b81-e962-4e4e-a909-99ad624c4257 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Lazy-loading 'resources' on Instance uuid b256c629-ce31-4c1a-a7a6-ed66c07e691a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:33:32 compute-0 nova_compute[189608]: 2025-11-24 22:33:32.384 189613 DEBUG nova.virt.libvirt.vif [None req-c5aa0b81-e962-4e4e-a909-99ad624c4257 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T22:32:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1941519261',display_name='tempest-TestServerBasicOps-server-1941519261',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1941519261',id=14,image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHhiiOkq9iUOdXjNpHmjtbZnprF40l9Ok8bS+ZGzP1suJLrBZla3uFjyC2KQSh8EA0qUn9JfUv9Ai00BpomIa3LapwlsNuCD5RdfS+v0/E5gEvSnYUQgcor+P5+PXWxL+A==',key_name='tempest-TestServerBasicOps-633402408',keypairs=<?>,launch_index=0,launched_at=2025-11-24T22:32:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b288cf23cdd049f48dfaafd888b33ea5',ramdisk_id='',reservation_id='r-05kp599q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ec71d7d5-c197-4331-bf8d-e2de71a8419f',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-396408097',owner_user_name='tempest-TestServerBasicOps-396408097-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T22:33:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1ff3ad5c90cd47639553ad5015a81aca',uuid=b256c629-ce31-4c1a-a7a6-ed66c07e691a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "91fe5820-9c04-4bb0-94bb-b6c3068e81e1", "address": "fa:16:3e:dd:fb:5f", "network": {"id": "4153540b-1891-459a-9fd1-3ba9595f1a33", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1140659131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b288cf23cdd049f48dfaafd888b33ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91fe5820-9c", "ovs_interfaceid": "91fe5820-9c04-4bb0-94bb-b6c3068e81e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 24 22:33:32 compute-0 nova_compute[189608]: 2025-11-24 22:33:32.386 189613 DEBUG nova.network.os_vif_util [None req-c5aa0b81-e962-4e4e-a909-99ad624c4257 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Converting VIF {"id": "91fe5820-9c04-4bb0-94bb-b6c3068e81e1", "address": "fa:16:3e:dd:fb:5f", "network": {"id": "4153540b-1891-459a-9fd1-3ba9595f1a33", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1140659131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b288cf23cdd049f48dfaafd888b33ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91fe5820-9c", "ovs_interfaceid": "91fe5820-9c04-4bb0-94bb-b6c3068e81e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 22:33:32 compute-0 nova_compute[189608]: 2025-11-24 22:33:32.388 189613 DEBUG nova.network.os_vif_util [None req-c5aa0b81-e962-4e4e-a909-99ad624c4257 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:dd:fb:5f,bridge_name='br-int',has_traffic_filtering=True,id=91fe5820-9c04-4bb0-94bb-b6c3068e81e1,network=Network(4153540b-1891-459a-9fd1-3ba9595f1a33),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91fe5820-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 22:33:32 compute-0 nova_compute[189608]: 2025-11-24 22:33:32.389 189613 DEBUG os_vif [None req-c5aa0b81-e962-4e4e-a909-99ad624c4257 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:dd:fb:5f,bridge_name='br-int',has_traffic_filtering=True,id=91fe5820-9c04-4bb0-94bb-b6c3068e81e1,network=Network(4153540b-1891-459a-9fd1-3ba9595f1a33),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91fe5820-9c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 24 22:33:32 compute-0 nova_compute[189608]: 2025-11-24 22:33:32.391 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:33:32 compute-0 nova_compute[189608]: 2025-11-24 22:33:32.393 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap91fe5820-9c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:33:32 compute-0 nova_compute[189608]: 2025-11-24 22:33:32.404 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 22:33:32 compute-0 nova_compute[189608]: 2025-11-24 22:33:32.408 189613 INFO os_vif [None req-c5aa0b81-e962-4e4e-a909-99ad624c4257 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:dd:fb:5f,bridge_name='br-int',has_traffic_filtering=True,id=91fe5820-9c04-4bb0-94bb-b6c3068e81e1,network=Network(4153540b-1891-459a-9fd1-3ba9595f1a33),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91fe5820-9c')
Nov 24 22:33:32 compute-0 nova_compute[189608]: 2025-11-24 22:33:32.409 189613 INFO nova.virt.libvirt.driver [None req-c5aa0b81-e962-4e4e-a909-99ad624c4257 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Deleting instance files /var/lib/nova/instances/b256c629-ce31-4c1a-a7a6-ed66c07e691a_del
Nov 24 22:33:32 compute-0 neutron-haproxy-ovnmeta-4153540b-1891-459a-9fd1-3ba9595f1a33[254433]: [NOTICE]   (254437) : haproxy version is 2.8.14-c23fe91
Nov 24 22:33:32 compute-0 neutron-haproxy-ovnmeta-4153540b-1891-459a-9fd1-3ba9595f1a33[254433]: [NOTICE]   (254437) : path to executable is /usr/sbin/haproxy
Nov 24 22:33:32 compute-0 neutron-haproxy-ovnmeta-4153540b-1891-459a-9fd1-3ba9595f1a33[254433]: [WARNING]  (254437) : Exiting Master process...
Nov 24 22:33:32 compute-0 neutron-haproxy-ovnmeta-4153540b-1891-459a-9fd1-3ba9595f1a33[254433]: [ALERT]    (254437) : Current worker (254439) exited with code 143 (Terminated)
Nov 24 22:33:32 compute-0 neutron-haproxy-ovnmeta-4153540b-1891-459a-9fd1-3ba9595f1a33[254433]: [WARNING]  (254437) : All workers exited. Exiting... (0)
Nov 24 22:33:32 compute-0 nova_compute[189608]: 2025-11-24 22:33:32.419 189613 INFO nova.virt.libvirt.driver [None req-c5aa0b81-e962-4e4e-a909-99ad624c4257 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Deletion of /var/lib/nova/instances/b256c629-ce31-4c1a-a7a6-ed66c07e691a_del complete
Nov 24 22:33:32 compute-0 systemd[1]: libpod-18c0f99b9ba71ab04e170345759b0c9bbb4ac91c02710d130cb422595be20344.scope: Deactivated successfully.
Nov 24 22:33:32 compute-0 podman[255107]: 2025-11-24 22:33:32.427133928 +0000 UTC m=+0.072173464 container died 18c0f99b9ba71ab04e170345759b0c9bbb4ac91c02710d130cb422595be20344 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4153540b-1891-459a-9fd1-3ba9595f1a33, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 22:33:32 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-18c0f99b9ba71ab04e170345759b0c9bbb4ac91c02710d130cb422595be20344-userdata-shm.mount: Deactivated successfully.
Nov 24 22:33:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-3bbd655e4dd91066498d36b5b5c8a4af74f1c56aba8acd268028f1fca84f58fc-merged.mount: Deactivated successfully.
Nov 24 22:33:32 compute-0 nova_compute[189608]: 2025-11-24 22:33:32.509 189613 INFO nova.compute.manager [None req-c5aa0b81-e962-4e4e-a909-99ad624c4257 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Took 0.44 seconds to destroy the instance on the hypervisor.
Nov 24 22:33:32 compute-0 nova_compute[189608]: 2025-11-24 22:33:32.511 189613 DEBUG oslo.service.loopingcall [None req-c5aa0b81-e962-4e4e-a909-99ad624c4257 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 24 22:33:32 compute-0 nova_compute[189608]: 2025-11-24 22:33:32.512 189613 DEBUG nova.compute.manager [-] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 24 22:33:32 compute-0 nova_compute[189608]: 2025-11-24 22:33:32.512 189613 DEBUG nova.network.neutron [-] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 24 22:33:32 compute-0 podman[255107]: 2025-11-24 22:33:32.536482575 +0000 UTC m=+0.181522101 container cleanup 18c0f99b9ba71ab04e170345759b0c9bbb4ac91c02710d130cb422595be20344 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4153540b-1891-459a-9fd1-3ba9595f1a33, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 22:33:32 compute-0 systemd[1]: libpod-conmon-18c0f99b9ba71ab04e170345759b0c9bbb4ac91c02710d130cb422595be20344.scope: Deactivated successfully.
Nov 24 22:33:32 compute-0 nova_compute[189608]: 2025-11-24 22:33:32.614 189613 DEBUG nova.compute.manager [req-a709f95a-118d-4b39-b5d5-9c87db377245 req-4b74ea1f-55f6-40f4-9e69-1f9916440024 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Received event network-vif-unplugged-91fe5820-9c04-4bb0-94bb-b6c3068e81e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:33:32 compute-0 nova_compute[189608]: 2025-11-24 22:33:32.614 189613 DEBUG oslo_concurrency.lockutils [req-a709f95a-118d-4b39-b5d5-9c87db377245 req-4b74ea1f-55f6-40f4-9e69-1f9916440024 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "b256c629-ce31-4c1a-a7a6-ed66c07e691a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:33:32 compute-0 nova_compute[189608]: 2025-11-24 22:33:32.615 189613 DEBUG oslo_concurrency.lockutils [req-a709f95a-118d-4b39-b5d5-9c87db377245 req-4b74ea1f-55f6-40f4-9e69-1f9916440024 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "b256c629-ce31-4c1a-a7a6-ed66c07e691a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:33:32 compute-0 nova_compute[189608]: 2025-11-24 22:33:32.616 189613 DEBUG oslo_concurrency.lockutils [req-a709f95a-118d-4b39-b5d5-9c87db377245 req-4b74ea1f-55f6-40f4-9e69-1f9916440024 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "b256c629-ce31-4c1a-a7a6-ed66c07e691a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:33:32 compute-0 nova_compute[189608]: 2025-11-24 22:33:32.616 189613 DEBUG nova.compute.manager [req-a709f95a-118d-4b39-b5d5-9c87db377245 req-4b74ea1f-55f6-40f4-9e69-1f9916440024 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] No waiting events found dispatching network-vif-unplugged-91fe5820-9c04-4bb0-94bb-b6c3068e81e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 22:33:32 compute-0 nova_compute[189608]: 2025-11-24 22:33:32.616 189613 DEBUG nova.compute.manager [req-a709f95a-118d-4b39-b5d5-9c87db377245 req-4b74ea1f-55f6-40f4-9e69-1f9916440024 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Received event network-vif-unplugged-91fe5820-9c04-4bb0-94bb-b6c3068e81e1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 24 22:33:32 compute-0 podman[255141]: 2025-11-24 22:33:32.657133054 +0000 UTC m=+0.084993981 container remove 18c0f99b9ba71ab04e170345759b0c9bbb4ac91c02710d130cb422595be20344 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4153540b-1891-459a-9fd1-3ba9595f1a33, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 22:33:32 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:33:32.670 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[23aea3ec-63b4-43a2-82da-1969f1f131a7]: (4, ('Mon Nov 24 10:33:32 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4153540b-1891-459a-9fd1-3ba9595f1a33 (18c0f99b9ba71ab04e170345759b0c9bbb4ac91c02710d130cb422595be20344)\n18c0f99b9ba71ab04e170345759b0c9bbb4ac91c02710d130cb422595be20344\nMon Nov 24 10:33:32 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4153540b-1891-459a-9fd1-3ba9595f1a33 (18c0f99b9ba71ab04e170345759b0c9bbb4ac91c02710d130cb422595be20344)\n18c0f99b9ba71ab04e170345759b0c9bbb4ac91c02710d130cb422595be20344\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:33:32 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:33:32.675 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[b83cc5ca-b3be-49e2-8198-bb3334426641]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:33:32 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:33:32.677 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4153540b-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:33:32 compute-0 nova_compute[189608]: 2025-11-24 22:33:32.679 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:33:32 compute-0 kernel: tap4153540b-10: left promiscuous mode
Nov 24 22:33:32 compute-0 nova_compute[189608]: 2025-11-24 22:33:32.697 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:33:32 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:33:32.704 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[822f9323-5884-491a-99f7-1a88fd13f64b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:33:32 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:33:32.718 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[c5ec52a4-ea03-4385-ba0e-b6989cf8d29b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:33:32 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:33:32.719 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[d7fd2c9a-459d-4d40-a39c-208f0f2345d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:33:32 compute-0 podman[255142]: 2025-11-24 22:33:32.724287961 +0000 UTC m=+0.136159561 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 24 22:33:32 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:33:32.741 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[26ef2bfd-f62a-4832-9a95-e5c51532f63d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538515, 'reachable_time': 33679, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255179, 'error': None, 'target': 'ovnmeta-4153540b-1891-459a-9fd1-3ba9595f1a33', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:33:32 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:33:32.744 106891 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4153540b-1891-459a-9fd1-3ba9595f1a33 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 24 22:33:32 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:33:32.744 106891 DEBUG oslo.privsep.daemon [-] privsep: reply[9bdbb0e1-8ae7-4a80-8523-4cf5b32e2734]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:33:32 compute-0 systemd[1]: run-netns-ovnmeta\x2d4153540b\x2d1891\x2d459a\x2d9fd1\x2d3ba9595f1a33.mount: Deactivated successfully.
Nov 24 22:33:32 compute-0 nova_compute[189608]: 2025-11-24 22:33:32.792 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:33:32 compute-0 nova_compute[189608]: 2025-11-24 22:33:32.793 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 22:33:32 compute-0 nova_compute[189608]: 2025-11-24 22:33:32.812 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 22:33:32 compute-0 podman[255172]: 2025-11-24 22:33:32.842446512 +0000 UTC m=+0.122992452 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 22:33:33 compute-0 nova_compute[189608]: 2025-11-24 22:33:33.896 189613 DEBUG nova.network.neutron [-] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:33:33 compute-0 nova_compute[189608]: 2025-11-24 22:33:33.916 189613 INFO nova.compute.manager [-] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Took 1.40 seconds to deallocate network for instance.
Nov 24 22:33:33 compute-0 nova_compute[189608]: 2025-11-24 22:33:33.963 189613 DEBUG oslo_concurrency.lockutils [None req-c5aa0b81-e962-4e4e-a909-99ad624c4257 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:33:33 compute-0 nova_compute[189608]: 2025-11-24 22:33:33.965 189613 DEBUG oslo_concurrency.lockutils [None req-c5aa0b81-e962-4e4e-a909-99ad624c4257 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:33:34 compute-0 nova_compute[189608]: 2025-11-24 22:33:34.082 189613 DEBUG nova.compute.provider_tree [None req-c5aa0b81-e962-4e4e-a909-99ad624c4257 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:33:34 compute-0 nova_compute[189608]: 2025-11-24 22:33:34.096 189613 DEBUG nova.scheduler.client.report [None req-c5aa0b81-e962-4e4e-a909-99ad624c4257 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:33:34 compute-0 nova_compute[189608]: 2025-11-24 22:33:34.114 189613 DEBUG oslo_concurrency.lockutils [None req-c5aa0b81-e962-4e4e-a909-99ad624c4257 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.149s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:33:34 compute-0 nova_compute[189608]: 2025-11-24 22:33:34.146 189613 INFO nova.scheduler.client.report [None req-c5aa0b81-e962-4e4e-a909-99ad624c4257 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Deleted allocations for instance b256c629-ce31-4c1a-a7a6-ed66c07e691a
Nov 24 22:33:34 compute-0 nova_compute[189608]: 2025-11-24 22:33:34.169 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:33:34 compute-0 nova_compute[189608]: 2025-11-24 22:33:34.232 189613 DEBUG oslo_concurrency.lockutils [None req-c5aa0b81-e962-4e4e-a909-99ad624c4257 1ff3ad5c90cd47639553ad5015a81aca b288cf23cdd049f48dfaafd888b33ea5 - - default default] Lock "b256c629-ce31-4c1a-a7a6-ed66c07e691a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.172s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:33:34 compute-0 nova_compute[189608]: 2025-11-24 22:33:34.735 189613 DEBUG nova.compute.manager [req-b4176e74-0447-483f-be07-0af5b9a58296 req-d2e86fb7-a64b-4b1b-8eca-df3bfe344925 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Received event network-vif-plugged-91fe5820-9c04-4bb0-94bb-b6c3068e81e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:33:34 compute-0 nova_compute[189608]: 2025-11-24 22:33:34.736 189613 DEBUG oslo_concurrency.lockutils [req-b4176e74-0447-483f-be07-0af5b9a58296 req-d2e86fb7-a64b-4b1b-8eca-df3bfe344925 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "b256c629-ce31-4c1a-a7a6-ed66c07e691a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:33:34 compute-0 nova_compute[189608]: 2025-11-24 22:33:34.736 189613 DEBUG oslo_concurrency.lockutils [req-b4176e74-0447-483f-be07-0af5b9a58296 req-d2e86fb7-a64b-4b1b-8eca-df3bfe344925 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "b256c629-ce31-4c1a-a7a6-ed66c07e691a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:33:34 compute-0 nova_compute[189608]: 2025-11-24 22:33:34.736 189613 DEBUG oslo_concurrency.lockutils [req-b4176e74-0447-483f-be07-0af5b9a58296 req-d2e86fb7-a64b-4b1b-8eca-df3bfe344925 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "b256c629-ce31-4c1a-a7a6-ed66c07e691a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:33:34 compute-0 nova_compute[189608]: 2025-11-24 22:33:34.736 189613 DEBUG nova.compute.manager [req-b4176e74-0447-483f-be07-0af5b9a58296 req-d2e86fb7-a64b-4b1b-8eca-df3bfe344925 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] No waiting events found dispatching network-vif-plugged-91fe5820-9c04-4bb0-94bb-b6c3068e81e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 22:33:34 compute-0 nova_compute[189608]: 2025-11-24 22:33:34.736 189613 WARNING nova.compute.manager [req-b4176e74-0447-483f-be07-0af5b9a58296 req-d2e86fb7-a64b-4b1b-8eca-df3bfe344925 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Received unexpected event network-vif-plugged-91fe5820-9c04-4bb0-94bb-b6c3068e81e1 for instance with vm_state deleted and task_state None.
Nov 24 22:33:34 compute-0 nova_compute[189608]: 2025-11-24 22:33:34.737 189613 DEBUG nova.compute.manager [req-b4176e74-0447-483f-be07-0af5b9a58296 req-d2e86fb7-a64b-4b1b-8eca-df3bfe344925 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Received event network-vif-deleted-91fe5820-9c04-4bb0-94bb-b6c3068e81e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:33:35 compute-0 nova_compute[189608]: 2025-11-24 22:33:35.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:33:35 compute-0 nova_compute[189608]: 2025-11-24 22:33:35.829 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:33:35 compute-0 nova_compute[189608]: 2025-11-24 22:33:35.830 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:33:35 compute-0 nova_compute[189608]: 2025-11-24 22:33:35.831 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:33:35 compute-0 nova_compute[189608]: 2025-11-24 22:33:35.832 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 22:33:35 compute-0 nova_compute[189608]: 2025-11-24 22:33:35.975 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:33:36 compute-0 nova_compute[189608]: 2025-11-24 22:33:36.046 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:33:36 compute-0 nova_compute[189608]: 2025-11-24 22:33:36.049 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:33:36 compute-0 nova_compute[189608]: 2025-11-24 22:33:36.131 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:33:36 compute-0 nova_compute[189608]: 2025-11-24 22:33:36.143 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:33:36 compute-0 nova_compute[189608]: 2025-11-24 22:33:36.209 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:33:36 compute-0 nova_compute[189608]: 2025-11-24 22:33:36.211 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:33:36 compute-0 nova_compute[189608]: 2025-11-24 22:33:36.284 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:33:36 compute-0 nova_compute[189608]: 2025-11-24 22:33:36.629 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:33:36 compute-0 nova_compute[189608]: 2025-11-24 22:33:36.631 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5015MB free_disk=72.09738540649414GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 22:33:36 compute-0 nova_compute[189608]: 2025-11-24 22:33:36.631 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:33:36 compute-0 nova_compute[189608]: 2025-11-24 22:33:36.632 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:33:36 compute-0 nova_compute[189608]: 2025-11-24 22:33:36.732 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance a3bee9ba-6618-44bd-a443-da9fff6862a9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:33:36 compute-0 nova_compute[189608]: 2025-11-24 22:33:36.733 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance 715e08a7-7174-4e14-a83d-67aab18333d8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:33:36 compute-0 nova_compute[189608]: 2025-11-24 22:33:36.734 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 22:33:36 compute-0 nova_compute[189608]: 2025-11-24 22:33:36.734 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 22:33:36 compute-0 nova_compute[189608]: 2025-11-24 22:33:36.808 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:33:36 compute-0 nova_compute[189608]: 2025-11-24 22:33:36.823 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:33:36 compute-0 nova_compute[189608]: 2025-11-24 22:33:36.850 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 22:33:36 compute-0 nova_compute[189608]: 2025-11-24 22:33:36.851 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.219s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:33:37 compute-0 nova_compute[189608]: 2025-11-24 22:33:37.399 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:33:39 compute-0 nova_compute[189608]: 2025-11-24 22:33:39.171 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:33:41 compute-0 sshd-session[255200]: Invalid user tunnel from 185.217.1.246 port 33501
Nov 24 22:33:41 compute-0 nova_compute[189608]: 2025-11-24 22:33:41.853 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:33:42 compute-0 nova_compute[189608]: 2025-11-24 22:33:42.408 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:33:42 compute-0 nova_compute[189608]: 2025-11-24 22:33:42.797 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:33:43 compute-0 sshd-session[255200]: Disconnecting invalid user tunnel 185.217.1.246 port 33501: Change of username or service not allowed: (tunnel,ssh-connection) -> (rahul,ssh-connection) [preauth]
Nov 24 22:33:43 compute-0 podman[255216]: 2025-11-24 22:33:43.545834091 +0000 UTC m=+0.098782641 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 24 22:33:43 compute-0 ovn_controller[97889]: 2025-11-24T22:33:43Z|00185|binding|INFO|Releasing lport ce3870c0-48db-470b-8d5d-479134c9b554 from this chassis (sb_readonly=0)
Nov 24 22:33:43 compute-0 nova_compute[189608]: 2025-11-24 22:33:43.692 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:33:43 compute-0 nova_compute[189608]: 2025-11-24 22:33:43.790 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:33:43 compute-0 nova_compute[189608]: 2025-11-24 22:33:43.792 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:33:43 compute-0 ovn_controller[97889]: 2025-11-24T22:33:43Z|00186|binding|INFO|Releasing lport ce3870c0-48db-470b-8d5d-479134c9b554 from this chassis (sb_readonly=0)
Nov 24 22:33:43 compute-0 nova_compute[189608]: 2025-11-24 22:33:43.931 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:33:44 compute-0 nova_compute[189608]: 2025-11-24 22:33:44.174 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:33:44 compute-0 nova_compute[189608]: 2025-11-24 22:33:44.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:33:45 compute-0 nova_compute[189608]: 2025-11-24 22:33:45.796 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:33:46 compute-0 podman[255241]: 2025-11-24 22:33:46.558376737 +0000 UTC m=+0.107380367 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 22:33:47 compute-0 nova_compute[189608]: 2025-11-24 22:33:47.346 189613 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764023612.343136, b256c629-ce31-4c1a-a7a6-ed66c07e691a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:33:47 compute-0 nova_compute[189608]: 2025-11-24 22:33:47.346 189613 INFO nova.compute.manager [-] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] VM Stopped (Lifecycle Event)
Nov 24 22:33:47 compute-0 nova_compute[189608]: 2025-11-24 22:33:47.376 189613 DEBUG nova.compute.manager [None req-d24dd9f7-2c5b-4a67-b163-894e23416996 - - - - - -] [instance: b256c629-ce31-4c1a-a7a6-ed66c07e691a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:33:47 compute-0 nova_compute[189608]: 2025-11-24 22:33:47.413 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:33:48 compute-0 podman[255260]: 2025-11-24 22:33:48.57796319 +0000 UTC m=+0.133354064 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:33:48 compute-0 nova_compute[189608]: 2025-11-24 22:33:48.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:33:48 compute-0 nova_compute[189608]: 2025-11-24 22:33:48.793 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 22:33:49 compute-0 nova_compute[189608]: 2025-11-24 22:33:49.177 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:33:50 compute-0 sshd-session[255239]: Invalid user rahul from 185.217.1.246 port 11701
Nov 24 22:33:50 compute-0 sshd-session[255239]: Disconnecting invalid user rahul 185.217.1.246 port 11701: Change of username or service not allowed: (rahul,ssh-connection) -> (nobody,ssh-connection) [preauth]
Nov 24 22:33:52 compute-0 nova_compute[189608]: 2025-11-24 22:33:52.419 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:33:54 compute-0 nova_compute[189608]: 2025-11-24 22:33:54.179 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:33:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:33:54.598 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:33:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:33:54.601 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:33:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:33:54.602 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:33:56 compute-0 sshd-session[255279]: Disconnecting authenticating user nobody 185.217.1.246 port 34874: Change of username or service not allowed: (nobody,ssh-connection) -> (joe,ssh-connection) [preauth]
Nov 24 22:33:57 compute-0 nova_compute[189608]: 2025-11-24 22:33:57.423 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:33:57 compute-0 podman[255282]: 2025-11-24 22:33:57.560455817 +0000 UTC m=+0.112625551 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9, release-0.7.12=, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, managed_by=edpm_ansible, com.redhat.component=ubi9-container, container_name=kepler)
Nov 24 22:33:57 compute-0 podman[255289]: 2025-11-24 22:33:57.589788958 +0000 UTC m=+0.125525911 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 22:33:57 compute-0 podman[255283]: 2025-11-24 22:33:57.601822852 +0000 UTC m=+0.141788996 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_id=edpm, io.openshift.expose-services=, release=1755695350, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 24 22:33:58 compute-0 ovn_controller[97889]: 2025-11-24T22:33:58Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:be:93:b1 10.100.0.203
Nov 24 22:33:58 compute-0 ovn_controller[97889]: 2025-11-24T22:33:58Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:be:93:b1 10.100.0.203
Nov 24 22:33:59 compute-0 nova_compute[189608]: 2025-11-24 22:33:59.182 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:33:59 compute-0 podman[203795]: time="2025-11-24T22:33:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:33:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:33:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Nov 24 22:33:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:33:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4807 "" "Go-http-client/1.1"
Nov 24 22:34:01 compute-0 openstack_network_exporter[205945]: ERROR   22:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:34:01 compute-0 openstack_network_exporter[205945]: ERROR   22:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:34:01 compute-0 openstack_network_exporter[205945]: ERROR   22:34:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:34:01 compute-0 openstack_network_exporter[205945]: ERROR   22:34:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:34:01 compute-0 openstack_network_exporter[205945]: ERROR   22:34:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:34:02 compute-0 nova_compute[189608]: 2025-11-24 22:34:02.432 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:34:02 compute-0 podman[255347]: 2025-11-24 22:34:02.689138027 +0000 UTC m=+0.096910512 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 22:34:03 compute-0 podman[255371]: 2025-11-24 22:34:03.556284091 +0000 UTC m=+0.105486289 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 24 22:34:03 compute-0 podman[255370]: 2025-11-24 22:34:03.5810388 +0000 UTC m=+0.137313978 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 24 22:34:04 compute-0 nova_compute[189608]: 2025-11-24 22:34:04.189 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:34:06 compute-0 sshd-session[255345]: Invalid user joe from 185.217.1.246 port 3814
Nov 24 22:34:06 compute-0 sshd-session[255345]: Disconnecting invalid user joe 185.217.1.246 port 3814: Change of username or service not allowed: (joe,ssh-connection) -> (ali,ssh-connection) [preauth]
Nov 24 22:34:07 compute-0 nova_compute[189608]: 2025-11-24 22:34:07.437 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:34:07 compute-0 sshd-session[255413]: Invalid user sol from 193.32.162.145 port 55484
Nov 24 22:34:07 compute-0 sshd-session[255413]: Connection closed by invalid user sol 193.32.162.145 port 55484 [preauth]
Nov 24 22:34:09 compute-0 nova_compute[189608]: 2025-11-24 22:34:09.193 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:34:12 compute-0 nova_compute[189608]: 2025-11-24 22:34:12.441 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:34:14 compute-0 nova_compute[189608]: 2025-11-24 22:34:14.197 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:34:14 compute-0 podman[255417]: 2025-11-24 22:34:14.504855867 +0000 UTC m=+0.061726539 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 24 22:34:15 compute-0 sshd-session[255415]: Invalid user ali from 185.217.1.246 port 41685
Nov 24 22:34:17 compute-0 nova_compute[189608]: 2025-11-24 22:34:17.446 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:34:17 compute-0 podman[255440]: 2025-11-24 22:34:17.565216289 +0000 UTC m=+0.111392162 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd)
Nov 24 22:34:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:17.633 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 24 22:34:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:17.633 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 24 22:34:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:17.634 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:34:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:17.634 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f4c57fbbfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:34:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:17.635 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:34:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:17.635 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:34:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:17.635 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:34:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:17.635 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:34:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:17.635 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:34:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:17.635 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:34:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:17.636 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:34:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:17.636 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:34:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:17.636 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:34:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:17.636 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:34:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:17.636 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:34:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:17.636 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:34:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:17.637 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:34:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:17.637 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:34:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:17.637 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:34:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:17.637 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:34:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:17.637 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:34:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:17.637 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:34:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:17.637 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:34:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:17.637 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:34:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:17.637 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:34:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:17.637 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:34:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:17.637 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:34:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:17.637 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:34:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:17.638 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55a07ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
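The registration lines above record each stevedore-loaded pollster being bound to one shared ThreadPoolExecutor with empty per-cycle caches. A minimal sketch of that pattern, using only the standard library and made-up meter names (this is not ceilometer's actual code):

    from concurrent.futures import ThreadPoolExecutor

    def poll(meter_name, cache, history, discovery_cache):
        # Stand-in for a pollster's real work (libvirt/Nova lookups in the agent).
        return f"polled {meter_name}"

    executor = ThreadPoolExecutor(max_workers=4)
    meters = ["disk.device.capacity", "cpu", "network.incoming.bytes.delta"]

    # One shared cache / history / discovery-cache dict per cycle, as the log shows ({}).
    futures = [executor.submit(poll, m, {}, {}, {}) for m in meters]
    for f in futures:
        print(f.result())
    executor.shutdown()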
Nov 24 22:34:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:17.642 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a3bee9ba-6618-44bd-a443-da9fff6862a9', 'name': 'te-0491564-asg-haouulsup5dm-5bjba57zh6x7-pvowny27behx', 'flavor': {'id': 'a49f1e6c-1051-4dea-812e-0063121444a0', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ea88776c-3c0b-4e74-99b4-08aadc81390f'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4a6957a775da42c9b535753d6b0279d6', 'user_id': 'fcf527fb124b42b9ab6a20cc0938b39f', 'hostId': '81545f88e9d372ef5a81fb4c58a4232a2d7e6782c7ccb5b924a3030b', 'status': 'active', 'metadata': {'metering.server_group': 'c6477657-e9b0-476c-83b3-9dc474e946c6'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 24 22:34:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:17.646 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 715e08a7-7174-4e14-a83d-67aab18333d8 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 24 22:34:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:17.647 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/715e08a7-7174-4e14-a83d-67aab18333d8 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}64060fdce0de69c84bc4ec1fbe9dbdfbc4dd57ffec34862b3445d9841d3967a2" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.342 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1832 Content-Type: application/json Date: Mon, 24 Nov 2025 22:34:17 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-46f35303-924b-4f51-a05d-9b71dc654c01 x-openstack-request-id: req-46f35303-924b-4f51-a05d-9b71dc654c01 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.342 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "715e08a7-7174-4e14-a83d-67aab18333d8", "name": "te-0491564-asg-haouulsup5dm-khtqnyy46ddc-wz3ip25x6yfo", "status": "ACTIVE", "tenant_id": "4a6957a775da42c9b535753d6b0279d6", "user_id": "fcf527fb124b42b9ab6a20cc0938b39f", "metadata": {"metering.server_group": "c6477657-e9b0-476c-83b3-9dc474e946c6"}, "hostId": "81545f88e9d372ef5a81fb4c58a4232a2d7e6782c7ccb5b924a3030b", "image": {"id": "ea88776c-3c0b-4e74-99b4-08aadc81390f", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/ea88776c-3c0b-4e74-99b4-08aadc81390f"}]}, "flavor": {"id": "a49f1e6c-1051-4dea-812e-0063121444a0", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/a49f1e6c-1051-4dea-812e-0063121444a0"}]}, "created": "2025-11-24T22:33:18Z", "updated": "2025-11-24T22:33:24Z", "addresses": {"": [{"version": 4, "addr": "10.100.0.203", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:be:93:b1"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/715e08a7-7174-4e14-a83d-67aab18333d8"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/715e08a7-7174-4e14-a83d-67aab18333d8"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-11-24T22:33:24.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000f", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.342 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/715e08a7-7174-4e14-a83d-67aab18333d8 used request id req-46f35303-924b-4f51-a05d-9b71dc654c01 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.343 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '715e08a7-7174-4e14-a83d-67aab18333d8', 'name': 'te-0491564-asg-haouulsup5dm-khtqnyy46ddc-wz3ip25x6yfo', 'flavor': {'id': 'a49f1e6c-1051-4dea-812e-0063121444a0', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ea88776c-3c0b-4e74-99b4-08aadc81390f'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4a6957a775da42c9b535753d6b0279d6', 'user_id': 'fcf527fb124b42b9ab6a20cc0938b39f', 'hostId': '81545f88e9d372ef5a81fb4c58a4232a2d7e6782c7ccb5b924a3030b', 'status': 'active', 'metadata': {'metering.server_group': 'c6477657-e9b0-476c-83b3-9dc474e946c6'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
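The REQ/RESP pair above is the agent filling in instance metadata from the Nova API when it is not already cached. A hedged sketch of an equivalent lookup with python-novaclient; the auth URL, credentials and project names below are placeholders, not values from this deployment:

    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from novaclient import client

    # Placeholder credentials and endpoint, for illustration only.
    auth = v3.Password(
        auth_url="https://keystone.example.com:5000/v3",
        username="ceilometer",
        password="secret",
        project_name="service",
        user_domain_name="Default",
        project_domain_name="Default",
    )
    sess = session.Session(auth=auth)
    nova = client.Client("2.1", session=sess)

    # Fetches the same server document the agent logs above as "RESP BODY".
    server = nova.servers.get("715e08a7-7174-4e14-a83d-67aab18333d8")
    print(server.name, server.status)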
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.344 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.344 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.344 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.344 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.345 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-24T22:34:18.344541) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.350 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 sshd-session[255415]: Disconnecting invalid user ali 185.217.1.246 port 41685: Change of username or service not allowed: (ali,ssh-connection) -> (user3,ssh-connection) [preauth]
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.360 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 715e08a7-7174-4e14-a83d-67aab18333d8 / tap9d8978ca-0c inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.361 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.362 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.362 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f4c580340b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.362 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.362 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.362 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.362 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.363 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.363 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.363 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.364 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f4c57fba4e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.364 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.364 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.364 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.364 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.365 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-24T22:34:18.362793) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.365 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-24T22:34:18.364491) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.384 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.384 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.405 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.405 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.406 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.406 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f4c57fbb080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.406 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.406 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.406 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.406 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.407 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-24T22:34:18.406936) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:34:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:34:18.453 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7e:33:1a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '7e:8a:65:da:aa:a2'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:34:18 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:34:18.455 106776 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 22:34:18 compute-0 nova_compute[189608]: 2025-11-24 22:34:18.454 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.458 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.read.bytes volume: 28965888 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.459 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.493 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.read.bytes volume: 30145536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.493 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.494 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.494 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f4c57fbb1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.494 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.494 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.494 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.494 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.494 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.read.latency volume: 1026400534 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.495 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.read.latency volume: 91052482 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.495 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.read.latency volume: 1116528238 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.495 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.read.latency volume: 67346772 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.496 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.496 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f4c57fb9730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.496 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.497 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.497 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.497 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.497 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-24T22:34:18.494784) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.498 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-24T22:34:18.497405) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.525 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/cpu volume: 231550000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.547 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/cpu volume: 52330000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.548 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
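The cpu samples above are cumulative guest CPU time in nanoseconds, so utilisation is normally derived later from two consecutive samples rather than read directly. An illustrative calculation; the earlier sample value and the 300 s polling interval are invented for the example:

    def cpu_util_percent(prev_ns, curr_ns, interval_s, vcpus):
        """Approximate CPU utilisation between two cumulative cpu-time samples."""
        return 100.0 * (curr_ns - prev_ns) / (interval_s * 1e9 * vcpus)

    # With the m1.nano flavor above (1 vCPU) and a hypothetical 300 s interval,
    # a 0.3 s increase in guest CPU time works out to about 0.1 % utilisation.
    print(cpu_util_percent(231_250_000_000, 231_550_000_000, 300, 1))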
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.549 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f4c57fbb200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.549 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.549 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.549 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.549 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.550 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.read.requests volume: 1041 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.550 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.551 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.read.requests volume: 1092 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.551 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.552 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f4c57fbb260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.553 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.553 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.553 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.553 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.553 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.554 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.554 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-24T22:34:18.549842) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.555 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-24T22:34:18.553616) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.555 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.555 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.556 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f4c57fbb2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.556 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.556 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.556 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.557 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.557 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.write.bytes volume: 72900608 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.557 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-24T22:34:18.557048) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.558 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.558 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.write.bytes volume: 72814592 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.559 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.559 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.560 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f4c57fbb320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.560 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.560 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.560 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.560 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.561 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.write.latency volume: 4091704388 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.561 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.562 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.write.latency volume: 2373446035 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.562 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.563 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.563 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f4c57fbb380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.563 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.564 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.564 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.564 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-24T22:34:18.560852) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.564 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.564 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.write.requests volume: 298 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.564 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.565 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-24T22:34:18.564426) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.565 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.write.requests volume: 308 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.565 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.566 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.566 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f4c58034380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.566 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.566 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.566 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.567 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.567 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.567 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.568 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.568 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f4c57fbb740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.568 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.568 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.568 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.568 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.569 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.incoming.bytes.delta volume: 168 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.569 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.569 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
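A ".delta" meter such as network.incoming.bytes.delta is the difference between the current cumulative interface counter and the last value seen for the same instance and tap device, which is why the earlier "No delta meter predecessor" line means the first sample has nothing to subtract. A small sketch of that bookkeeping, with placeholder identifiers and counter values:

    _previous = {}  # (instance_id, interface) -> last cumulative byte counter

    def incoming_bytes_delta(instance_id, iface, cumulative_bytes):
        key = (instance_id, iface)
        prev = _previous.get(key)
        _previous[key] = cumulative_bytes
        if prev is None:
            return None  # first observation: no predecessor, nothing to subtract
        return max(cumulative_bytes - prev, 0)  # guard against counter resets

    print(incoming_bytes_delta("instance-a", "tap0", 1000))  # None on the first poll
    print(incoming_bytes_delta("instance-a", "tap0", 1168))  # 168 on the next poll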
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.570 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f4c57fbb3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.570 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.570 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.570 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-24T22:34:18.566991) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.570 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.570 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-24T22:34:18.568896) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.570 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.571 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.571 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f4c57fbb9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.571 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.571 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.572 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.572 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.572 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.572 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: te-0491564-asg-haouulsup5dm-khtqnyy46ddc-wz3ip25x6yfo>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-0491564-asg-haouulsup5dm-khtqnyy46ddc-wz3ip25x6yfo>]
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.572 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f4c57fbb440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.572 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.572 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-24T22:34:18.570892) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.573 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.573 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.573 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.573 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.573 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f4c57fbbc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.574 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.574 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.574 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.574 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.574 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.574 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.incoming.packets volume: 10 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.575 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.575 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f4c57fbbcb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.575 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.575 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.575 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.576 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.576 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-24T22:34:18.572113) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.576 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.576 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-24T22:34:18.573200) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.576 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-24T22:34:18.574422) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.576 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.576 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-24T22:34:18.576117) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.577 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.577 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f4c57fbbd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.577 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.577 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.578 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.578 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.578 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.578 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.579 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.579 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f4c57fbbda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.579 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.579 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.580 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.580 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.580 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.580 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.581 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.581 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f4c57fbbe30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.581 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.581 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.582 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.582 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.582 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.582 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.583 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.583 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f4c57fbb680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.583 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.583 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.584 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.584 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.584 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-24T22:34:18.578111) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.584 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/memory.usage volume: 43.3046875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.584 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-24T22:34:18.580158) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.584 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/memory.usage volume: 43.515625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.584 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-24T22:34:18.582145) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.585 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.585 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f4c57fbbec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.585 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.585 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.585 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.585 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.586 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.586 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: te-0491564-asg-haouulsup5dm-khtqnyy46ddc-wz3ip25x6yfo>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-0491564-asg-haouulsup5dm-khtqnyy46ddc-wz3ip25x6yfo>]
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.587 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-24T22:34:18.584135) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.587 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f4c57fbb6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.587 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-24T22:34:18.585952) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.587 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.587 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.587 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.587 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.587 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.incoming.bytes volume: 1520 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.588 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-24T22:34:18.587735) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.588 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.incoming.bytes volume: 1346 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.589 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.589 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f4c59b99130>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.589 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.589 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.589 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.590 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.590 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.allocation volume: 30744576 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.590 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.591 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.allocation volume: 30482432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.591 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.592 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.592 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f4c57fbbf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.592 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.592 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.593 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.593 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.593 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.593 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.594 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.594 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-24T22:34:18.590084) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.595 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-24T22:34:18.593176) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.597 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.597 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.597 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.597 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.597 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.598 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.598 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.598 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.598 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.598 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.598 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.598 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.598 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.598 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.598 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.598 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.598 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.598 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.598 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.598 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.599 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.599 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.599 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.599 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.599 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:34:18 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:34:18.599 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:34:19 compute-0 nova_compute[189608]: 2025-11-24 22:34:19.201 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:34:19 compute-0 podman[255461]: 2025-11-24 22:34:19.557744952 +0000 UTC m=+0.116252213 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true)
Nov 24 22:34:22 compute-0 nova_compute[189608]: 2025-11-24 22:34:22.451 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:34:22 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:34:22.457 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d2f80616-70e9-484c-836d-1edab81fe5d9, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:34:24 compute-0 nova_compute[189608]: 2025-11-24 22:34:24.204 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:34:24 compute-0 sshd-session[255482]: Invalid user user3 from 185.217.1.246 port 20822
Nov 24 22:34:24 compute-0 sshd-session[255482]: Disconnecting invalid user user3 185.217.1.246 port 20822: Change of username or service not allowed: (user3,ssh-connection) -> (vali,ssh-connection) [preauth]
Nov 24 22:34:27 compute-0 nova_compute[189608]: 2025-11-24 22:34:27.455 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:34:28 compute-0 podman[255486]: 2025-11-24 22:34:28.531041252 +0000 UTC m=+0.085079934 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, com.redhat.component=ubi9-container, version=9.4, container_name=kepler, managed_by=edpm_ansible, release=1214.1726694543, io.openshift.expose-services=, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9)
Nov 24 22:34:28 compute-0 podman[255487]: 2025-11-24 22:34:28.555219474 +0000 UTC m=+0.087492390 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, name=ubi9-minimal, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, vcs-type=git, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, distribution-scope=public, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter)
Nov 24 22:34:28 compute-0 podman[255493]: 2025-11-24 22:34:28.575759742 +0000 UTC m=+0.090452411 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 24 22:34:29 compute-0 sshd-session[255484]: Invalid user vali from 185.217.1.246 port 37923
Nov 24 22:34:29 compute-0 nova_compute[189608]: 2025-11-24 22:34:29.207 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:34:29 compute-0 sshd-session[255484]: Disconnecting invalid user vali 185.217.1.246 port 37923: Change of username or service not allowed: (vali,ssh-connection) -> (sshd,ssh-connection) [preauth]
Nov 24 22:34:29 compute-0 podman[203795]: time="2025-11-24T22:34:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:34:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:34:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Nov 24 22:34:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:34:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4803 "" "Go-http-client/1.1"
Nov 24 22:34:31 compute-0 openstack_network_exporter[205945]: ERROR   22:34:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:34:31 compute-0 openstack_network_exporter[205945]: ERROR   22:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:34:31 compute-0 openstack_network_exporter[205945]: ERROR   22:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:34:31 compute-0 openstack_network_exporter[205945]: ERROR   22:34:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:34:31 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:34:31 compute-0 openstack_network_exporter[205945]: ERROR   22:34:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:34:31 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:34:32 compute-0 nova_compute[189608]: 2025-11-24 22:34:32.458 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:34:33 compute-0 podman[255542]: 2025-11-24 22:34:33.559281021 +0000 UTC m=+0.116421048 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 22:34:33 compute-0 podman[255564]: 2025-11-24 22:34:33.745438726 +0000 UTC m=+0.131690313 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 22:34:33 compute-0 podman[255583]: 2025-11-24 22:34:33.900123292 +0000 UTC m=+0.155478012 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 22:34:34 compute-0 nova_compute[189608]: 2025-11-24 22:34:34.211 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:34:34 compute-0 nova_compute[189608]: 2025-11-24 22:34:34.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:34:34 compute-0 nova_compute[189608]: 2025-11-24 22:34:34.796 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 22:34:34 compute-0 nova_compute[189608]: 2025-11-24 22:34:34.797 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 22:34:35 compute-0 sshd-session[255610]: Invalid user sol from 45.148.10.240 port 57680
Nov 24 22:34:35 compute-0 sshd-session[255610]: Connection closed by invalid user sol 45.148.10.240 port 57680 [preauth]
Nov 24 22:34:36 compute-0 nova_compute[189608]: 2025-11-24 22:34:36.140 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "refresh_cache-a3bee9ba-6618-44bd-a443-da9fff6862a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:34:36 compute-0 nova_compute[189608]: 2025-11-24 22:34:36.141 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquired lock "refresh_cache-a3bee9ba-6618-44bd-a443-da9fff6862a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:34:36 compute-0 nova_compute[189608]: 2025-11-24 22:34:36.141 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 24 22:34:36 compute-0 nova_compute[189608]: 2025-11-24 22:34:36.142 189613 DEBUG nova.objects.instance [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lazy-loading 'info_cache' on Instance uuid a3bee9ba-6618-44bd-a443-da9fff6862a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:34:37 compute-0 nova_compute[189608]: 2025-11-24 22:34:37.464 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:34:38 compute-0 sshd-session[255541]: Disconnecting authenticating user sshd 185.217.1.246 port 59246: Change of username or service not allowed: (sshd,ssh-connection) -> (itadmin,ssh-connection) [preauth]
Nov 24 22:34:39 compute-0 nova_compute[189608]: 2025-11-24 22:34:39.216 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:34:39 compute-0 nova_compute[189608]: 2025-11-24 22:34:39.332 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Updating instance_info_cache with network_info: [{"id": "5efccbc3-b2bb-4d9d-ba64-9382a4b2487b", "address": "fa:16:3e:40:6c:bb", "network": {"id": "a164481b-21c8-4cae-a6e9-b470d8a55a1f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.88", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6957a775da42c9b535753d6b0279d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5efccbc3-b2", "ovs_interfaceid": "5efccbc3-b2bb-4d9d-ba64-9382a4b2487b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:34:39 compute-0 nova_compute[189608]: 2025-11-24 22:34:39.353 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Releasing lock "refresh_cache-a3bee9ba-6618-44bd-a443-da9fff6862a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:34:39 compute-0 nova_compute[189608]: 2025-11-24 22:34:39.354 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 24 22:34:39 compute-0 nova_compute[189608]: 2025-11-24 22:34:39.356 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:34:39 compute-0 nova_compute[189608]: 2025-11-24 22:34:39.389 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:34:39 compute-0 nova_compute[189608]: 2025-11-24 22:34:39.392 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:34:39 compute-0 nova_compute[189608]: 2025-11-24 22:34:39.394 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:34:39 compute-0 nova_compute[189608]: 2025-11-24 22:34:39.395 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 22:34:39 compute-0 nova_compute[189608]: 2025-11-24 22:34:39.512 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:34:39 compute-0 nova_compute[189608]: 2025-11-24 22:34:39.610 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:34:39 compute-0 nova_compute[189608]: 2025-11-24 22:34:39.612 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:34:39 compute-0 nova_compute[189608]: 2025-11-24 22:34:39.710 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:34:39 compute-0 nova_compute[189608]: 2025-11-24 22:34:39.721 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:34:39 compute-0 nova_compute[189608]: 2025-11-24 22:34:39.786 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:34:39 compute-0 nova_compute[189608]: 2025-11-24 22:34:39.793 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:34:39 compute-0 nova_compute[189608]: 2025-11-24 22:34:39.889 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:34:40 compute-0 nova_compute[189608]: 2025-11-24 22:34:40.243 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:34:40 compute-0 nova_compute[189608]: 2025-11-24 22:34:40.245 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4931MB free_disk=72.06910705566406GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 22:34:40 compute-0 nova_compute[189608]: 2025-11-24 22:34:40.245 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:34:40 compute-0 nova_compute[189608]: 2025-11-24 22:34:40.246 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:34:40 compute-0 nova_compute[189608]: 2025-11-24 22:34:40.339 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance a3bee9ba-6618-44bd-a443-da9fff6862a9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:34:40 compute-0 nova_compute[189608]: 2025-11-24 22:34:40.340 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance 715e08a7-7174-4e14-a83d-67aab18333d8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:34:40 compute-0 nova_compute[189608]: 2025-11-24 22:34:40.340 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 22:34:40 compute-0 nova_compute[189608]: 2025-11-24 22:34:40.341 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 22:34:40 compute-0 nova_compute[189608]: 2025-11-24 22:34:40.406 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:34:40 compute-0 nova_compute[189608]: 2025-11-24 22:34:40.426 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:34:40 compute-0 nova_compute[189608]: 2025-11-24 22:34:40.427 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 22:34:40 compute-0 nova_compute[189608]: 2025-11-24 22:34:40.428 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.182s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:34:42 compute-0 nova_compute[189608]: 2025-11-24 22:34:42.468 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:34:42 compute-0 nova_compute[189608]: 2025-11-24 22:34:42.865 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:34:43 compute-0 sshd-session[255624]: Invalid user itadmin from 185.217.1.246 port 31650
Nov 24 22:34:43 compute-0 sshd-session[255624]: Disconnecting invalid user itadmin 185.217.1.246 port 31650: Change of username or service not allowed: (itadmin,ssh-connection) -> (sapadm,ssh-connection) [preauth]
Nov 24 22:34:43 compute-0 nova_compute[189608]: 2025-11-24 22:34:43.790 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:34:43 compute-0 nova_compute[189608]: 2025-11-24 22:34:43.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:34:44 compute-0 nova_compute[189608]: 2025-11-24 22:34:44.219 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:34:44 compute-0 podman[255626]: 2025-11-24 22:34:44.763707618 +0000 UTC m=+0.075581660 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 24 22:34:45 compute-0 nova_compute[189608]: 2025-11-24 22:34:45.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:34:45 compute-0 nova_compute[189608]: 2025-11-24 22:34:45.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:34:46 compute-0 nova_compute[189608]: 2025-11-24 22:34:46.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:34:47 compute-0 nova_compute[189608]: 2025-11-24 22:34:47.473 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:34:48 compute-0 podman[255650]: 2025-11-24 22:34:48.598043199 +0000 UTC m=+0.136152121 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 22:34:49 compute-0 nova_compute[189608]: 2025-11-24 22:34:49.227 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:34:49 compute-0 nova_compute[189608]: 2025-11-24 22:34:49.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:34:49 compute-0 nova_compute[189608]: 2025-11-24 22:34:49.793 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 22:34:50 compute-0 sshd-session[255648]: Invalid user sapadm from 185.217.1.246 port 51440
Nov 24 22:34:50 compute-0 podman[255671]: 2025-11-24 22:34:50.451667645 +0000 UTC m=+0.116110258 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, config_id=edpm, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 24 22:34:51 compute-0 sshd-session[255648]: Disconnecting invalid user sapadm 185.217.1.246 port 51440: Change of username or service not allowed: (sapadm,ssh-connection) -> (openmediavault,ssh-connection) [preauth]
Nov 24 22:34:52 compute-0 nova_compute[189608]: 2025-11-24 22:34:52.479 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:34:53 compute-0 ovn_controller[97889]: 2025-11-24T22:34:53Z|00187|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Nov 24 22:34:54 compute-0 nova_compute[189608]: 2025-11-24 22:34:54.230 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:34:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:34:54.599 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:34:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:34:54.600 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:34:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:34:54.601 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:34:57 compute-0 sshd-session[255691]: Invalid user openmediavault from 185.217.1.246 port 14540
Nov 24 22:34:57 compute-0 nova_compute[189608]: 2025-11-24 22:34:57.483 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:34:58 compute-0 sshd-session[255691]: Disconnecting invalid user openmediavault 185.217.1.246 port 14540: Change of username or service not allowed: (openmediavault,ssh-connection) -> (xiaoxiao,ssh-connection) [preauth]
Nov 24 22:34:59 compute-0 nova_compute[189608]: 2025-11-24 22:34:59.233 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:34:59 compute-0 podman[255693]: 2025-11-24 22:34:59.565291937 +0000 UTC m=+0.117263775 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, release=1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, maintainer=Red Hat, Inc., architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, managed_by=edpm_ansible, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Nov 24 22:34:59 compute-0 podman[255694]: 2025-11-24 22:34:59.582038387 +0000 UTC m=+0.133434917 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, io.openshift.tags=minimal rhel9, architecture=x86_64, maintainer=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc., config_id=edpm)
Nov 24 22:34:59 compute-0 podman[255695]: 2025-11-24 22:34:59.598639693 +0000 UTC m=+0.143454219 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_managed=true)
Nov 24 22:34:59 compute-0 podman[203795]: time="2025-11-24T22:34:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:34:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:34:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Nov 24 22:34:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:34:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4808 "" "Go-http-client/1.1"
Nov 24 22:35:01 compute-0 openstack_network_exporter[205945]: ERROR   22:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:35:01 compute-0 openstack_network_exporter[205945]: ERROR   22:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:35:01 compute-0 openstack_network_exporter[205945]: ERROR   22:35:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:35:01 compute-0 openstack_network_exporter[205945]: ERROR   22:35:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:35:01 compute-0 openstack_network_exporter[205945]: ERROR   22:35:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:35:02 compute-0 nova_compute[189608]: 2025-11-24 22:35:02.488 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:35:02 compute-0 nova_compute[189608]: 2025-11-24 22:35:02.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:35:02 compute-0 nova_compute[189608]: 2025-11-24 22:35:02.793 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:35:02 compute-0 nova_compute[189608]: 2025-11-24 22:35:02.799 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:35:02 compute-0 nova_compute[189608]: 2025-11-24 22:35:02.802 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:35:02 compute-0 nova_compute[189608]: 2025-11-24 22:35:02.803 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:35:02 compute-0 nova_compute[189608]: 2025-11-24 22:35:02.803 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:35:02 compute-0 nova_compute[189608]: 2025-11-24 22:35:02.804 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:35:02 compute-0 nova_compute[189608]: 2025-11-24 22:35:02.823 189613 DEBUG nova.virt.libvirt.imagecache [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Nov 24 22:35:02 compute-0 nova_compute[189608]: 2025-11-24 22:35:02.836 189613 DEBUG nova.virt.libvirt.imagecache [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Nov 24 22:35:02 compute-0 nova_compute[189608]: 2025-11-24 22:35:02.837 189613 DEBUG nova.virt.libvirt.imagecache [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Image id ea88776c-3c0b-4e74-99b4-08aadc81390f yields fingerprint e3114b07aff678ef05dd12aafd3a42953942e41b _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Nov 24 22:35:02 compute-0 nova_compute[189608]: 2025-11-24 22:35:02.837 189613 INFO nova.virt.libvirt.imagecache [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] image ea88776c-3c0b-4e74-99b4-08aadc81390f at (/var/lib/nova/instances/_base/e3114b07aff678ef05dd12aafd3a42953942e41b): checking
Nov 24 22:35:02 compute-0 nova_compute[189608]: 2025-11-24 22:35:02.838 189613 DEBUG nova.virt.libvirt.imagecache [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] image ea88776c-3c0b-4e74-99b4-08aadc81390f at (/var/lib/nova/instances/_base/e3114b07aff678ef05dd12aafd3a42953942e41b): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Nov 24 22:35:02 compute-0 nova_compute[189608]: 2025-11-24 22:35:02.840 189613 DEBUG nova.virt.libvirt.imagecache [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Nov 24 22:35:02 compute-0 nova_compute[189608]: 2025-11-24 22:35:02.841 189613 DEBUG nova.virt.libvirt.imagecache [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] a3bee9ba-6618-44bd-a443-da9fff6862a9 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Nov 24 22:35:02 compute-0 nova_compute[189608]: 2025-11-24 22:35:02.841 189613 DEBUG nova.virt.libvirt.imagecache [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] a3bee9ba-6618-44bd-a443-da9fff6862a9 has a disk file _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:129
Nov 24 22:35:02 compute-0 nova_compute[189608]: 2025-11-24 22:35:02.842 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:35:02 compute-0 nova_compute[189608]: 2025-11-24 22:35:02.941 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:35:02 compute-0 nova_compute[189608]: 2025-11-24 22:35:02.942 189613 DEBUG nova.virt.libvirt.imagecache [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance a3bee9ba-6618-44bd-a443-da9fff6862a9 is backed by e3114b07aff678ef05dd12aafd3a42953942e41b _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:141
Nov 24 22:35:02 compute-0 nova_compute[189608]: 2025-11-24 22:35:02.942 189613 DEBUG nova.virt.libvirt.imagecache [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] 715e08a7-7174-4e14-a83d-67aab18333d8 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Nov 24 22:35:02 compute-0 nova_compute[189608]: 2025-11-24 22:35:02.942 189613 DEBUG nova.virt.libvirt.imagecache [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] 715e08a7-7174-4e14-a83d-67aab18333d8 has a disk file _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:129
Nov 24 22:35:02 compute-0 nova_compute[189608]: 2025-11-24 22:35:02.943 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:35:03 compute-0 nova_compute[189608]: 2025-11-24 22:35:03.005 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:35:03 compute-0 nova_compute[189608]: 2025-11-24 22:35:03.006 189613 DEBUG nova.virt.libvirt.imagecache [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance 715e08a7-7174-4e14-a83d-67aab18333d8 is backed by e3114b07aff678ef05dd12aafd3a42953942e41b _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:141
Nov 24 22:35:03 compute-0 nova_compute[189608]: 2025-11-24 22:35:03.006 189613 WARNING nova.virt.libvirt.imagecache [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Unknown base file: /var/lib/nova/instances/_base/bc2e84058646d2c6ba728b20ebecd0301036e9ed
Nov 24 22:35:03 compute-0 nova_compute[189608]: 2025-11-24 22:35:03.006 189613 WARNING nova.virt.libvirt.imagecache [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Unknown base file: /var/lib/nova/instances/_base/0c781545786ba0e2b6d5b36227eef817e147d42c
Nov 24 22:35:03 compute-0 nova_compute[189608]: 2025-11-24 22:35:03.007 189613 WARNING nova.virt.libvirt.imagecache [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Unknown base file: /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e
Nov 24 22:35:03 compute-0 nova_compute[189608]: 2025-11-24 22:35:03.007 189613 INFO nova.virt.libvirt.imagecache [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Active base files: /var/lib/nova/instances/_base/e3114b07aff678ef05dd12aafd3a42953942e41b
Nov 24 22:35:03 compute-0 nova_compute[189608]: 2025-11-24 22:35:03.007 189613 INFO nova.virt.libvirt.imagecache [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Removable base files: /var/lib/nova/instances/_base/bc2e84058646d2c6ba728b20ebecd0301036e9ed /var/lib/nova/instances/_base/0c781545786ba0e2b6d5b36227eef817e147d42c /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e
Nov 24 22:35:03 compute-0 nova_compute[189608]: 2025-11-24 22:35:03.008 189613 INFO nova.virt.libvirt.imagecache [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/bc2e84058646d2c6ba728b20ebecd0301036e9ed
Nov 24 22:35:03 compute-0 nova_compute[189608]: 2025-11-24 22:35:03.008 189613 INFO nova.virt.libvirt.imagecache [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/0c781545786ba0e2b6d5b36227eef817e147d42c
Nov 24 22:35:03 compute-0 nova_compute[189608]: 2025-11-24 22:35:03.009 189613 INFO nova.virt.libvirt.imagecache [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/a7fd44d1fb7e601cde0700b9c4e906d1bdcf220e
Nov 24 22:35:03 compute-0 nova_compute[189608]: 2025-11-24 22:35:03.009 189613 DEBUG nova.virt.libvirt.imagecache [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Nov 24 22:35:03 compute-0 nova_compute[189608]: 2025-11-24 22:35:03.009 189613 DEBUG nova.virt.libvirt.imagecache [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Nov 24 22:35:03 compute-0 nova_compute[189608]: 2025-11-24 22:35:03.010 189613 DEBUG nova.virt.libvirt.imagecache [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Nov 24 22:35:03 compute-0 nova_compute[189608]: 2025-11-24 22:35:03.010 189613 INFO nova.virt.libvirt.imagecache [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Nov 24 22:35:03 compute-0 sshd-session[255756]: Invalid user xiaoxiao from 185.217.1.246 port 45157
Nov 24 22:35:03 compute-0 podman[255765]: 2025-11-24 22:35:03.80939058 +0000 UTC m=+0.125441698 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 22:35:03 compute-0 podman[255787]: 2025-11-24 22:35:03.919093579 +0000 UTC m=+0.110416782 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 22:35:04 compute-0 podman[255806]: 2025-11-24 22:35:04.126608747 +0000 UTC m=+0.159764895 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 22:35:04 compute-0 nova_compute[189608]: 2025-11-24 22:35:04.235 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:35:04 compute-0 sshd-session[255756]: Disconnecting invalid user xiaoxiao 185.217.1.246 port 45157: Change of username or service not allowed: (xiaoxiao,ssh-connection) -> (dock,ssh-connection) [preauth]
Nov 24 22:35:07 compute-0 nova_compute[189608]: 2025-11-24 22:35:07.493 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:35:09 compute-0 nova_compute[189608]: 2025-11-24 22:35:09.239 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:35:12 compute-0 nova_compute[189608]: 2025-11-24 22:35:12.501 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:35:13 compute-0 sshd-session[255832]: Invalid user dock from 185.217.1.246 port 11115
Nov 24 22:35:14 compute-0 nova_compute[189608]: 2025-11-24 22:35:14.244 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:35:14 compute-0 sshd-session[255832]: Disconnecting invalid user dock 185.217.1.246 port 11115: Change of username or service not allowed: (dock,ssh-connection) -> (oper,ssh-connection) [preauth]
Nov 24 22:35:15 compute-0 podman[255835]: 2025-11-24 22:35:15.594518538 +0000 UTC m=+0.142684984 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 24 22:35:17 compute-0 nova_compute[189608]: 2025-11-24 22:35:17.518 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:35:19 compute-0 nova_compute[189608]: 2025-11-24 22:35:19.247 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:35:19 compute-0 podman[255861]: 2025-11-24 22:35:19.612007721 +0000 UTC m=+0.147429202 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 22:35:21 compute-0 sshd-session[255859]: Invalid user oper from 185.217.1.246 port 45762
Nov 24 22:35:21 compute-0 podman[255882]: 2025-11-24 22:35:21.569274748 +0000 UTC m=+0.125219392 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:35:22 compute-0 sshd-session[255859]: Disconnecting invalid user oper 185.217.1.246 port 45762: Change of username or service not allowed: (oper,ssh-connection) -> (nagios,ssh-connection) [preauth]
Nov 24 22:35:22 compute-0 nova_compute[189608]: 2025-11-24 22:35:22.530 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:35:24 compute-0 nova_compute[189608]: 2025-11-24 22:35:24.249 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:35:25 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 24 22:35:27 compute-0 nova_compute[189608]: 2025-11-24 22:35:27.536 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:35:27 compute-0 sshd-session[255902]: Invalid user nagios from 185.217.1.246 port 3485
Nov 24 22:35:29 compute-0 nova_compute[189608]: 2025-11-24 22:35:29.251 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:35:29 compute-0 podman[203795]: time="2025-11-24T22:35:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:35:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:35:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Nov 24 22:35:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:35:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4809 "" "Go-http-client/1.1"
Nov 24 22:35:29 compute-0 sshd-session[255902]: Disconnecting invalid user nagios 185.217.1.246 port 3485: Change of username or service not allowed: (nagios,ssh-connection) -> (nc,ssh-connection) [preauth]
Nov 24 22:35:30 compute-0 podman[255905]: 2025-11-24 22:35:30.53394321 +0000 UTC m=+0.087947374 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., container_name=kepler, io.openshift.tags=base rhel9, vcs-type=git, vendor=Red Hat, Inc., release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, name=ubi9, release=1214.1726694543, com.redhat.component=ubi9-container, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, build-date=2024-09-18T21:23:30, config_id=edpm)
Nov 24 22:35:30 compute-0 podman[255907]: 2025-11-24 22:35:30.534499838 +0000 UTC m=+0.082263688 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 24 22:35:30 compute-0 podman[255906]: 2025-11-24 22:35:30.55293274 +0000 UTC m=+0.107345266 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, release=1755695350, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=openstack_network_exporter, io.buildah.version=1.33.7, version=9.6, distribution-scope=public, io.openshift.expose-services=)
Nov 24 22:35:31 compute-0 openstack_network_exporter[205945]: ERROR   22:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:35:31 compute-0 openstack_network_exporter[205945]: ERROR   22:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:35:31 compute-0 openstack_network_exporter[205945]: ERROR   22:35:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:35:31 compute-0 openstack_network_exporter[205945]: ERROR   22:35:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:35:31 compute-0 openstack_network_exporter[205945]: ERROR   22:35:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:35:32 compute-0 nova_compute[189608]: 2025-11-24 22:35:32.005 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:35:32 compute-0 nova_compute[189608]: 2025-11-24 22:35:32.540 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:35:32 compute-0 nova_compute[189608]: 2025-11-24 22:35:32.720 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:35:32 compute-0 nova_compute[189608]: 2025-11-24 22:35:32.742 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Triggering sync for uuid a3bee9ba-6618-44bd-a443-da9fff6862a9 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 24 22:35:32 compute-0 nova_compute[189608]: 2025-11-24 22:35:32.743 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Triggering sync for uuid 715e08a7-7174-4e14-a83d-67aab18333d8 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 24 22:35:32 compute-0 nova_compute[189608]: 2025-11-24 22:35:32.743 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "a3bee9ba-6618-44bd-a443-da9fff6862a9" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:35:32 compute-0 nova_compute[189608]: 2025-11-24 22:35:32.744 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "a3bee9ba-6618-44bd-a443-da9fff6862a9" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:35:32 compute-0 nova_compute[189608]: 2025-11-24 22:35:32.744 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "715e08a7-7174-4e14-a83d-67aab18333d8" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:35:32 compute-0 nova_compute[189608]: 2025-11-24 22:35:32.745 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "715e08a7-7174-4e14-a83d-67aab18333d8" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:35:32 compute-0 nova_compute[189608]: 2025-11-24 22:35:32.776 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "a3bee9ba-6618-44bd-a443-da9fff6862a9" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.032s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:35:32 compute-0 nova_compute[189608]: 2025-11-24 22:35:32.777 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "715e08a7-7174-4e14-a83d-67aab18333d8" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:35:34 compute-0 nova_compute[189608]: 2025-11-24 22:35:34.253 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:35:34 compute-0 podman[255959]: 2025-11-24 22:35:34.548192762 +0000 UTC m=+0.094995823 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 24 22:35:34 compute-0 podman[255961]: 2025-11-24 22:35:34.579496435 +0000 UTC m=+0.115508470 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 24 22:35:34 compute-0 podman[255960]: 2025-11-24 22:35:34.600770946 +0000 UTC m=+0.134617244 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2)
Nov 24 22:35:35 compute-0 sshd-session[255957]: Invalid user nc from 185.217.1.246 port 35245
Nov 24 22:35:35 compute-0 nova_compute[189608]: 2025-11-24 22:35:35.818 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:35:35 compute-0 nova_compute[189608]: 2025-11-24 22:35:35.819 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 22:35:36 compute-0 nova_compute[189608]: 2025-11-24 22:35:36.152 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "refresh_cache-715e08a7-7174-4e14-a83d-67aab18333d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:35:36 compute-0 nova_compute[189608]: 2025-11-24 22:35:36.153 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquired lock "refresh_cache-715e08a7-7174-4e14-a83d-67aab18333d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:35:36 compute-0 nova_compute[189608]: 2025-11-24 22:35:36.154 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 24 22:35:37 compute-0 nova_compute[189608]: 2025-11-24 22:35:37.179 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Updating instance_info_cache with network_info: [{"id": "9d8978ca-0c88-4b94-bebb-cca47795447e", "address": "fa:16:3e:be:93:b1", "network": {"id": "a164481b-21c8-4cae-a6e9-b470d8a55a1f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6957a775da42c9b535753d6b0279d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d8978ca-0c", "ovs_interfaceid": "9d8978ca-0c88-4b94-bebb-cca47795447e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:35:37 compute-0 nova_compute[189608]: 2025-11-24 22:35:37.202 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Releasing lock "refresh_cache-715e08a7-7174-4e14-a83d-67aab18333d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:35:37 compute-0 nova_compute[189608]: 2025-11-24 22:35:37.202 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 24 22:35:37 compute-0 nova_compute[189608]: 2025-11-24 22:35:37.203 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:35:37 compute-0 nova_compute[189608]: 2025-11-24 22:35:37.224 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:35:37 compute-0 nova_compute[189608]: 2025-11-24 22:35:37.225 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:35:37 compute-0 nova_compute[189608]: 2025-11-24 22:35:37.225 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:35:37 compute-0 nova_compute[189608]: 2025-11-24 22:35:37.226 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 22:35:37 compute-0 nova_compute[189608]: 2025-11-24 22:35:37.307 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:35:37 compute-0 nova_compute[189608]: 2025-11-24 22:35:37.367 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:35:37 compute-0 nova_compute[189608]: 2025-11-24 22:35:37.369 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:35:37 compute-0 nova_compute[189608]: 2025-11-24 22:35:37.438 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:35:37 compute-0 nova_compute[189608]: 2025-11-24 22:35:37.447 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:35:37 compute-0 nova_compute[189608]: 2025-11-24 22:35:37.513 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:35:37 compute-0 nova_compute[189608]: 2025-11-24 22:35:37.514 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:35:37 compute-0 nova_compute[189608]: 2025-11-24 22:35:37.546 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:35:37 compute-0 nova_compute[189608]: 2025-11-24 22:35:37.580 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:35:37 compute-0 nova_compute[189608]: 2025-11-24 22:35:37.948 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:35:37 compute-0 nova_compute[189608]: 2025-11-24 22:35:37.950 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4925MB free_disk=72.0693244934082GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 22:35:37 compute-0 nova_compute[189608]: 2025-11-24 22:35:37.950 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:35:37 compute-0 nova_compute[189608]: 2025-11-24 22:35:37.951 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:35:38 compute-0 nova_compute[189608]: 2025-11-24 22:35:38.161 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance a3bee9ba-6618-44bd-a443-da9fff6862a9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:35:38 compute-0 nova_compute[189608]: 2025-11-24 22:35:38.162 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance 715e08a7-7174-4e14-a83d-67aab18333d8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:35:38 compute-0 nova_compute[189608]: 2025-11-24 22:35:38.162 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 22:35:38 compute-0 nova_compute[189608]: 2025-11-24 22:35:38.163 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 22:35:38 compute-0 nova_compute[189608]: 2025-11-24 22:35:38.239 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Refreshing inventories for resource provider 7680d048-14f1-46f8-a34d-a7eb32eb11df _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 24 22:35:38 compute-0 nova_compute[189608]: 2025-11-24 22:35:38.305 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Updating ProviderTree inventory for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 24 22:35:38 compute-0 nova_compute[189608]: 2025-11-24 22:35:38.306 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Updating inventory in ProviderTree for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 24 22:35:38 compute-0 nova_compute[189608]: 2025-11-24 22:35:38.322 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Refreshing aggregate associations for resource provider 7680d048-14f1-46f8-a34d-a7eb32eb11df, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 24 22:35:38 compute-0 nova_compute[189608]: 2025-11-24 22:35:38.352 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Refreshing trait associations for resource provider 7680d048-14f1-46f8-a34d-a7eb32eb11df, traits: HW_CPU_X86_AVX2,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NODE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE4A,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_ACCELERATORS,HW_CPU_X86_SHA,HW_CPU_X86_AVX,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_1_2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 24 22:35:38 compute-0 sshd-session[255957]: Disconnecting invalid user nc 185.217.1.246 port 35245: Change of username or service not allowed: (nc,ssh-connection) -> (ADMIN,ssh-connection) [preauth]
Nov 24 22:35:38 compute-0 nova_compute[189608]: 2025-11-24 22:35:38.417 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:35:38 compute-0 nova_compute[189608]: 2025-11-24 22:35:38.428 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:35:38 compute-0 nova_compute[189608]: 2025-11-24 22:35:38.429 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 22:35:38 compute-0 nova_compute[189608]: 2025-11-24 22:35:38.430 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.479s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:35:39 compute-0 nova_compute[189608]: 2025-11-24 22:35:39.255 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:35:42 compute-0 nova_compute[189608]: 2025-11-24 22:35:42.555 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:35:44 compute-0 nova_compute[189608]: 2025-11-24 22:35:44.258 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:35:45 compute-0 nova_compute[189608]: 2025-11-24 22:35:45.020 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:35:45 compute-0 nova_compute[189608]: 2025-11-24 22:35:45.021 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:35:45 compute-0 nova_compute[189608]: 2025-11-24 22:35:45.021 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:35:46 compute-0 podman[256035]: 2025-11-24 22:35:46.535129162 +0000 UTC m=+0.087892762 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 24 22:35:46 compute-0 sshd-session[256033]: Invalid user ADMIN from 185.217.1.246 port 64488
Nov 24 22:35:46 compute-0 nova_compute[189608]: 2025-11-24 22:35:46.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:35:46 compute-0 nova_compute[189608]: 2025-11-24 22:35:46.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:35:47 compute-0 nova_compute[189608]: 2025-11-24 22:35:47.562 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:35:47 compute-0 nova_compute[189608]: 2025-11-24 22:35:47.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:35:49 compute-0 nova_compute[189608]: 2025-11-24 22:35:49.261 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:35:49 compute-0 nova_compute[189608]: 2025-11-24 22:35:49.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:35:49 compute-0 nova_compute[189608]: 2025-11-24 22:35:49.794 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 22:35:49 compute-0 nova_compute[189608]: 2025-11-24 22:35:49.795 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:35:49 compute-0 nova_compute[189608]: 2025-11-24 22:35:49.796 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 24 22:35:49 compute-0 sshd-session[256033]: Disconnecting invalid user ADMIN 185.217.1.246 port 64488: Change of username or service not allowed: (ADMIN,ssh-connection) -> (mary,ssh-connection) [preauth]
Nov 24 22:35:50 compute-0 podman[256061]: 2025-11-24 22:35:50.518070821 +0000 UTC m=+0.076668863 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 24 22:35:52 compute-0 podman[256083]: 2025-11-24 22:35:52.557144549 +0000 UTC m=+0.102155354 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true)
Nov 24 22:35:52 compute-0 nova_compute[189608]: 2025-11-24 22:35:52.567 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:35:54 compute-0 nova_compute[189608]: 2025-11-24 22:35:54.264 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:35:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:35:54.601 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:35:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:35:54.603 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:35:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:35:54.604 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:35:54 compute-0 nova_compute[189608]: 2025-11-24 22:35:54.806 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:35:54 compute-0 nova_compute[189608]: 2025-11-24 22:35:54.806 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 24 22:35:54 compute-0 nova_compute[189608]: 2025-11-24 22:35:54.830 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 24 22:35:55 compute-0 sshd-session[256082]: Invalid user mary from 185.217.1.246 port 42961
Nov 24 22:35:57 compute-0 sshd-session[256082]: Disconnecting invalid user mary 185.217.1.246 port 42961: Change of username or service not allowed: (mary,ssh-connection) -> (monitor,ssh-connection) [preauth]
Nov 24 22:35:57 compute-0 nova_compute[189608]: 2025-11-24 22:35:57.572 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:35:59 compute-0 nova_compute[189608]: 2025-11-24 22:35:59.267 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:35:59 compute-0 podman[203795]: time="2025-11-24T22:35:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:35:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:35:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Nov 24 22:35:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:35:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4807 "" "Go-http-client/1.1"
Nov 24 22:36:01 compute-0 openstack_network_exporter[205945]: ERROR   22:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:36:01 compute-0 openstack_network_exporter[205945]: ERROR   22:36:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:36:01 compute-0 openstack_network_exporter[205945]: ERROR   22:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:36:01 compute-0 openstack_network_exporter[205945]: ERROR   22:36:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:36:01 compute-0 openstack_network_exporter[205945]: ERROR   22:36:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:36:01 compute-0 podman[256104]: 2025-11-24 22:36:01.801221944 +0000 UTC m=+0.358260493 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, name=ubi9, distribution-scope=public, maintainer=Red Hat, Inc., vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, config_id=edpm, io.openshift.expose-services=, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 24 22:36:01 compute-0 podman[256111]: 2025-11-24 22:36:01.801966027 +0000 UTC m=+0.343102591 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 24 22:36:01 compute-0 podman[256105]: 2025-11-24 22:36:01.808879392 +0000 UTC m=+0.353444973 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, name=ubi9-minimal, architecture=x86_64, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, build-date=2025-08-20T13:12:41, vcs-type=git, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, managed_by=edpm_ansible, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc.)
Nov 24 22:36:02 compute-0 nova_compute[189608]: 2025-11-24 22:36:02.576 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:36:04 compute-0 nova_compute[189608]: 2025-11-24 22:36:04.276 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:36:04 compute-0 nova_compute[189608]: 2025-11-24 22:36:04.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:36:05 compute-0 sshd-session[256159]: Invalid user monitor from 185.217.1.246 port 18541
Nov 24 22:36:05 compute-0 podman[256163]: 2025-11-24 22:36:05.410380989 +0000 UTC m=+0.107273984 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 24 22:36:05 compute-0 podman[256161]: 2025-11-24 22:36:05.417788859 +0000 UTC m=+0.121715333 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 24 22:36:05 compute-0 podman[256162]: 2025-11-24 22:36:05.456372348 +0000 UTC m=+0.148162734 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 22:36:06 compute-0 sshd-session[256159]: Disconnecting invalid user monitor 185.217.1.246 port 18541: Change of username or service not allowed: (monitor,ssh-connection) -> (orangepi,ssh-connection) [preauth]
Nov 24 22:36:07 compute-0 nova_compute[189608]: 2025-11-24 22:36:07.583 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:36:09 compute-0 nova_compute[189608]: 2025-11-24 22:36:09.284 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:36:12 compute-0 nova_compute[189608]: 2025-11-24 22:36:12.588 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:36:14 compute-0 nova_compute[189608]: 2025-11-24 22:36:14.282 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:36:14 compute-0 sshd-session[256230]: Invalid user orangepi from 185.217.1.246 port 51476
Nov 24 22:36:16 compute-0 sshd-session[256230]: Disconnecting invalid user orangepi 185.217.1.246 port 51476: Change of username or service not allowed: (orangepi,ssh-connection) -> (devops,ssh-connection) [preauth]
Nov 24 22:36:17 compute-0 podman[256233]: 2025-11-24 22:36:17.523827718 +0000 UTC m=+0.079418278 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 24 22:36:17 compute-0 nova_compute[189608]: 2025-11-24 22:36:17.594 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.634 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.635 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.635 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55986840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.636 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f4c57fbbfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.636 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55986840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.637 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55986840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.637 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55986840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.637 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55986840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.638 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55986840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.638 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55986840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.638 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55986840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.638 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55986840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.638 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55986840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.639 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55986840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.639 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55986840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.639 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55986840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.639 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55986840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.639 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55986840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.640 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55986840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.640 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55986840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.640 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55986840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.641 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55986840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.641 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55986840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.641 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55986840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.642 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55986840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.642 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55986840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.642 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55986840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.642 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55986840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.642 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c55986840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.644 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a3bee9ba-6618-44bd-a443-da9fff6862a9', 'name': 'te-0491564-asg-haouulsup5dm-5bjba57zh6x7-pvowny27behx', 'flavor': {'id': 'a49f1e6c-1051-4dea-812e-0063121444a0', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ea88776c-3c0b-4e74-99b4-08aadc81390f'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4a6957a775da42c9b535753d6b0279d6', 'user_id': 'fcf527fb124b42b9ab6a20cc0938b39f', 'hostId': '81545f88e9d372ef5a81fb4c58a4232a2d7e6782c7ccb5b924a3030b', 'status': 'active', 'metadata': {'metering.server_group': 'c6477657-e9b0-476c-83b3-9dc474e946c6'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.647 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '715e08a7-7174-4e14-a83d-67aab18333d8', 'name': 'te-0491564-asg-haouulsup5dm-khtqnyy46ddc-wz3ip25x6yfo', 'flavor': {'id': 'a49f1e6c-1051-4dea-812e-0063121444a0', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ea88776c-3c0b-4e74-99b4-08aadc81390f'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4a6957a775da42c9b535753d6b0279d6', 'user_id': 'fcf527fb124b42b9ab6a20cc0938b39f', 'hostId': '81545f88e9d372ef5a81fb4c58a4232a2d7e6782c7ccb5b924a3030b', 'status': 'active', 'metadata': {'metering.server_group': 'c6477657-e9b0-476c-83b3-9dc474e946c6'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.647 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.647 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.647 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.647 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.648 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-24T22:36:17.647569) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.651 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.654 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.655 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.655 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f4c580340b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.655 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.655 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.655 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.655 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.655 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.655 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.656 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.656 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f4c57fba4e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.656 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.656 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-24T22:36:17.655489) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.656 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.656 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.656 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.657 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-24T22:36:17.656798) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.679 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.679 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.697 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.697 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.698 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.698 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f4c57fbb080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.698 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.698 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.698 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.698 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.699 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-24T22:36:17.698782) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.758 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.read.bytes volume: 30194176 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.758 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.808 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.read.bytes volume: 30145536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.809 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.814 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.814 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f4c57fbb1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.815 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.815 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.815 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.815 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.815 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.read.latency volume: 1080868082 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.816 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.read.latency volume: 103540431 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.817 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.read.latency volume: 1116528238 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.818 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.read.latency volume: 67346772 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.818 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.819 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f4c57fb9730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.819 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.819 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.819 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.820 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.821 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-24T22:36:17.815667) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.821 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-24T22:36:17.820027) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.862 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/cpu volume: 333330000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.902 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/cpu volume: 171320000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.903 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.904 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f4c57fbb200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.904 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.904 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.905 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.905 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.905 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.read.requests volume: 1088 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.906 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.906 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.read.requests volume: 1092 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.907 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.907 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.908 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f4c57fbb260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.908 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.908 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.908 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.908 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.909 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.909 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.909 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.910 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.911 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.911 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f4c57fbb2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.911 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.911 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.911 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.912 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.912 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-24T22:36:17.905231) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.913 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.write.bytes volume: 73003008 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.913 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.913 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-24T22:36:17.908817) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.913 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.write.bytes volume: 72876032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.914 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-24T22:36:17.912745) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.914 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.915 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.915 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f4c57fbb320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.915 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.916 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.916 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.916 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.916 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.write.latency volume: 4097372034 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.917 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.917 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.write.latency volume: 2401662271 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.918 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.919 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.919 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f4c57fbb380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.920 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.920 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.920 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.920 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.920 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.write.requests volume: 302 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.921 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.922 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.write.requests volume: 319 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.922 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-24T22:36:17.916544) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.922 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-24T22:36:17.920758) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.922 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.923 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.924 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f4c58034380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.924 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.924 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.924 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.925 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.926 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-24T22:36:17.925155) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.926 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.926 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.927 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.927 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f4c57fbb740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.928 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.928 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.928 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.928 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.928 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.929 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.incoming.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.930 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.930 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f4c57fbb3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.930 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.931 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.931 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.931 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.932 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.932 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f4c57fbb9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.932 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.933 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f4c57fbb440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.933 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.933 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.933 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.933 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.934 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.934 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f4c57fbbc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.935 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.935 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.935 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.935 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.936 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-24T22:36:17.928720) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.936 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-24T22:36:17.931468) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.936 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.936 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-24T22:36:17.933818) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.936 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.936 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-24T22:36:17.935709) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.937 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.937 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f4c57fbbcb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.938 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.938 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.938 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.938 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.938 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.939 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.940 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.940 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f4c57fbbd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.940 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.940 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.941 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.941 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.941 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.942 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.943 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.944 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-24T22:36:17.938654) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.944 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f4c57fbbda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.944 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-24T22:36:17.941288) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.944 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.944 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.944 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.944 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.945 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.946 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.946 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.947 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f4c57fbbe30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.947 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.947 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.948 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.948 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.948 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.outgoing.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.948 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.949 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.949 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f4c57fbb680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.950 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.950 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-24T22:36:17.944880) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.950 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.951 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-24T22:36:17.948171) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.951 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.951 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.951 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/memory.usage volume: 42.8359375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.951 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/memory.usage volume: 43.50390625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.951 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.952 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f4c57fbbec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.952 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.952 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f4c57fbb6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.952 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.952 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.952 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.952 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.953 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.incoming.bytes volume: 1520 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.953 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.953 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.953 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-24T22:36:17.951118) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.954 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-24T22:36:17.952870) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.954 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f4c59b99130>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.954 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.954 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.954 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.954 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.954 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.allocation volume: 30744576 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.954 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.955 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.allocation volume: 30482432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.955 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.956 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.956 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f4c57fbbf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.956 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.956 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.956 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.956 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.956 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.957 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.957 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.958 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.958 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.958 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.958 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.958 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.958 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.959 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.959 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.959 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.960 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.960 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-24T22:36:17.954503) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.960 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.960 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.960 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.960 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.960 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.960 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.961 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.961 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.961 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.961 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.961 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.961 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.961 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.961 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.962 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.962 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:36:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:36:17.960 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-24T22:36:17.956761) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:36:19 compute-0 nova_compute[189608]: 2025-11-24 22:36:19.289 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:36:21 compute-0 podman[256262]: 2025-11-24 22:36:21.581118898 +0000 UTC m=+0.132955892 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 22:36:22 compute-0 sshd-session[256232]: Invalid user devops from 185.217.1.246 port 12974
Nov 24 22:36:22 compute-0 nova_compute[189608]: 2025-11-24 22:36:22.601 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:36:23 compute-0 sshd-session[256232]: Disconnecting invalid user devops 185.217.1.246 port 12974: Change of username or service not allowed: (devops,ssh-connection) -> (demo,ssh-connection) [preauth]
Nov 24 22:36:23 compute-0 podman[256280]: 2025-11-24 22:36:23.542601085 +0000 UTC m=+0.105955433 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 24 22:36:24 compute-0 nova_compute[189608]: 2025-11-24 22:36:24.291 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:36:27 compute-0 nova_compute[189608]: 2025-11-24 22:36:27.608 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:36:27 compute-0 sshd-session[256300]: Invalid user demo from 185.217.1.246 port 46515
Nov 24 22:36:29 compute-0 nova_compute[189608]: 2025-11-24 22:36:29.294 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:36:29 compute-0 sshd-session[256302]: Invalid user funded from 45.148.10.240 port 58310
Nov 24 22:36:29 compute-0 sshd-session[256302]: Connection closed by invalid user funded 45.148.10.240 port 58310 [preauth]
Nov 24 22:36:29 compute-0 podman[203795]: time="2025-11-24T22:36:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:36:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:36:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Nov 24 22:36:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:36:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4807 "" "Go-http-client/1.1"
Nov 24 22:36:31 compute-0 openstack_network_exporter[205945]: ERROR   22:36:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:36:31 compute-0 openstack_network_exporter[205945]: ERROR   22:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:36:31 compute-0 openstack_network_exporter[205945]: ERROR   22:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:36:31 compute-0 openstack_network_exporter[205945]: ERROR   22:36:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:36:31 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:36:31 compute-0 openstack_network_exporter[205945]: ERROR   22:36:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:36:31 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:36:31 compute-0 sshd-session[256300]: Disconnecting invalid user demo 185.217.1.246 port 46515: Change of username or service not allowed: (demo,ssh-connection) -> (liuj,ssh-connection) [preauth]
Nov 24 22:36:32 compute-0 podman[256305]: 2025-11-24 22:36:32.578465561 +0000 UTC m=+0.118909817 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, architecture=x86_64, config_id=edpm, release=1755695350, distribution-scope=public, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7)
Nov 24 22:36:32 compute-0 podman[256306]: 2025-11-24 22:36:32.585049025 +0000 UTC m=+0.119557916 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 24 22:36:32 compute-0 podman[256304]: 2025-11-24 22:36:32.599622638 +0000 UTC m=+0.142097696 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, distribution-scope=public, io.buildah.version=1.29.0, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, version=9.4, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release-0.7.12=, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc.)
Nov 24 22:36:32 compute-0 nova_compute[189608]: 2025-11-24 22:36:32.615 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:36:34 compute-0 nova_compute[189608]: 2025-11-24 22:36:34.302 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:36:35 compute-0 sshd-session[256355]: Invalid user matrix from 185.156.73.233 port 54304
Nov 24 22:36:35 compute-0 sshd-session[256355]: Connection closed by invalid user matrix 185.156.73.233 port 54304 [preauth]
Nov 24 22:36:35 compute-0 nova_compute[189608]: 2025-11-24 22:36:35.804 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:36:35 compute-0 nova_compute[189608]: 2025-11-24 22:36:35.806 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 22:36:35 compute-0 nova_compute[189608]: 2025-11-24 22:36:35.806 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 22:36:36 compute-0 nova_compute[189608]: 2025-11-24 22:36:36.155 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "refresh_cache-a3bee9ba-6618-44bd-a443-da9fff6862a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:36:36 compute-0 nova_compute[189608]: 2025-11-24 22:36:36.156 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquired lock "refresh_cache-a3bee9ba-6618-44bd-a443-da9fff6862a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:36:36 compute-0 nova_compute[189608]: 2025-11-24 22:36:36.157 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 24 22:36:36 compute-0 nova_compute[189608]: 2025-11-24 22:36:36.157 189613 DEBUG nova.objects.instance [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lazy-loading 'info_cache' on Instance uuid a3bee9ba-6618-44bd-a443-da9fff6862a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:36:36 compute-0 podman[256374]: 2025-11-24 22:36:36.500755945 +0000 UTC m=+0.063218506 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 24 22:36:36 compute-0 podman[256376]: 2025-11-24 22:36:36.525579706 +0000 UTC m=+0.077648883 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 24 22:36:36 compute-0 podman[256375]: 2025-11-24 22:36:36.556130336 +0000 UTC m=+0.111043131 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Nov 24 22:36:37 compute-0 nova_compute[189608]: 2025-11-24 22:36:37.197 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Updating instance_info_cache with network_info: [{"id": "5efccbc3-b2bb-4d9d-ba64-9382a4b2487b", "address": "fa:16:3e:40:6c:bb", "network": {"id": "a164481b-21c8-4cae-a6e9-b470d8a55a1f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.88", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6957a775da42c9b535753d6b0279d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5efccbc3-b2", "ovs_interfaceid": "5efccbc3-b2bb-4d9d-ba64-9382a4b2487b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:36:37 compute-0 nova_compute[189608]: 2025-11-24 22:36:37.211 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Releasing lock "refresh_cache-a3bee9ba-6618-44bd-a443-da9fff6862a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:36:37 compute-0 nova_compute[189608]: 2025-11-24 22:36:37.212 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 24 22:36:37 compute-0 nova_compute[189608]: 2025-11-24 22:36:37.617 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:36:37 compute-0 nova_compute[189608]: 2025-11-24 22:36:37.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:36:37 compute-0 nova_compute[189608]: 2025-11-24 22:36:37.815 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:36:37 compute-0 nova_compute[189608]: 2025-11-24 22:36:37.815 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:36:37 compute-0 nova_compute[189608]: 2025-11-24 22:36:37.816 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:36:37 compute-0 nova_compute[189608]: 2025-11-24 22:36:37.816 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 22:36:37 compute-0 nova_compute[189608]: 2025-11-24 22:36:37.885 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:36:37 compute-0 nova_compute[189608]: 2025-11-24 22:36:37.956 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:36:37 compute-0 nova_compute[189608]: 2025-11-24 22:36:37.957 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:36:38 compute-0 nova_compute[189608]: 2025-11-24 22:36:38.015 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:36:38 compute-0 nova_compute[189608]: 2025-11-24 22:36:38.021 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:36:38 compute-0 nova_compute[189608]: 2025-11-24 22:36:38.080 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:36:38 compute-0 nova_compute[189608]: 2025-11-24 22:36:38.082 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:36:38 compute-0 nova_compute[189608]: 2025-11-24 22:36:38.142 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:36:38 compute-0 nova_compute[189608]: 2025-11-24 22:36:38.440 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:36:38 compute-0 nova_compute[189608]: 2025-11-24 22:36:38.441 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4917MB free_disk=72.06926345825195GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 22:36:38 compute-0 nova_compute[189608]: 2025-11-24 22:36:38.442 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:36:38 compute-0 nova_compute[189608]: 2025-11-24 22:36:38.442 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:36:38 compute-0 nova_compute[189608]: 2025-11-24 22:36:38.527 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance a3bee9ba-6618-44bd-a443-da9fff6862a9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:36:38 compute-0 nova_compute[189608]: 2025-11-24 22:36:38.528 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance 715e08a7-7174-4e14-a83d-67aab18333d8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:36:38 compute-0 nova_compute[189608]: 2025-11-24 22:36:38.528 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 22:36:38 compute-0 nova_compute[189608]: 2025-11-24 22:36:38.528 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 22:36:38 compute-0 nova_compute[189608]: 2025-11-24 22:36:38.815 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:36:38 compute-0 nova_compute[189608]: 2025-11-24 22:36:38.829 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:36:38 compute-0 nova_compute[189608]: 2025-11-24 22:36:38.831 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 22:36:38 compute-0 nova_compute[189608]: 2025-11-24 22:36:38.831 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.389s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:36:39 compute-0 nova_compute[189608]: 2025-11-24 22:36:39.300 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:36:40 compute-0 sshd-session[256373]: Invalid user liuj from 185.217.1.246 port 21821
Nov 24 22:36:40 compute-0 sshd-session[256373]: Disconnecting invalid user liuj 185.217.1.246 port 21821: Change of username or service not allowed: (liuj,ssh-connection) -> (fa,ssh-connection) [preauth]
Nov 24 22:36:42 compute-0 nova_compute[189608]: 2025-11-24 22:36:42.621 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:36:44 compute-0 nova_compute[189608]: 2025-11-24 22:36:44.304 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:36:44 compute-0 nova_compute[189608]: 2025-11-24 22:36:44.828 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:36:44 compute-0 nova_compute[189608]: 2025-11-24 22:36:44.828 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:36:44 compute-0 nova_compute[189608]: 2025-11-24 22:36:44.829 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:36:45 compute-0 sshd-session[256451]: Invalid user fa from 185.217.1.246 port 47444
Nov 24 22:36:46 compute-0 sshd-session[256451]: Disconnecting invalid user fa 185.217.1.246 port 47444: Change of username or service not allowed: (fa,ssh-connection) -> (instrument,ssh-connection) [preauth]
Nov 24 22:36:46 compute-0 nova_compute[189608]: 2025-11-24 22:36:46.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:36:47 compute-0 nova_compute[189608]: 2025-11-24 22:36:47.626 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:36:47 compute-0 nova_compute[189608]: 2025-11-24 22:36:47.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:36:48 compute-0 podman[256454]: 2025-11-24 22:36:48.502828946 +0000 UTC m=+0.059931384 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 24 22:36:48 compute-0 nova_compute[189608]: 2025-11-24 22:36:48.792 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:36:49 compute-0 nova_compute[189608]: 2025-11-24 22:36:49.306 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:36:49 compute-0 nova_compute[189608]: 2025-11-24 22:36:49.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:36:49 compute-0 nova_compute[189608]: 2025-11-24 22:36:49.793 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 22:36:52 compute-0 podman[256479]: 2025-11-24 22:36:52.537078749 +0000 UTC m=+0.092281368 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:36:52 compute-0 nova_compute[189608]: 2025-11-24 22:36:52.632 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:36:54 compute-0 sshd-session[256476]: Invalid user instrument from 185.217.1.246 port 4160
Nov 24 22:36:54 compute-0 podman[256499]: 2025-11-24 22:36:54.171302988 +0000 UTC m=+0.068221531 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 24 22:36:54 compute-0 nova_compute[189608]: 2025-11-24 22:36:54.312 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:36:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:36:54.601 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:36:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:36:54.602 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:36:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:36:54.602 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:36:56 compute-0 sshd-session[256476]: Disconnecting invalid user instrument 185.217.1.246 port 4160: Change of username or service not allowed: (instrument,ssh-connection) -> (tomcat,ssh-connection) [preauth]
Nov 24 22:36:57 compute-0 nova_compute[189608]: 2025-11-24 22:36:57.637 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:36:59 compute-0 nova_compute[189608]: 2025-11-24 22:36:59.312 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:36:59 compute-0 podman[203795]: time="2025-11-24T22:36:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:36:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:36:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Nov 24 22:36:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:36:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4805 "" "Go-http-client/1.1"
Nov 24 22:37:01 compute-0 openstack_network_exporter[205945]: ERROR   22:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:37:01 compute-0 openstack_network_exporter[205945]: ERROR   22:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:37:01 compute-0 openstack_network_exporter[205945]: ERROR   22:37:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:37:01 compute-0 openstack_network_exporter[205945]: ERROR   22:37:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:37:01 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:37:01 compute-0 openstack_network_exporter[205945]: ERROR   22:37:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:37:01 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:37:01 compute-0 sshd-session[256520]: Invalid user tomcat from 185.217.1.246 port 43420
Nov 24 22:37:02 compute-0 nova_compute[189608]: 2025-11-24 22:37:02.640 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:37:03 compute-0 podman[256523]: 2025-11-24 22:37:03.529969503 +0000 UTC m=+0.079538553 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=openstack_network_exporter, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, vendor=Red Hat, Inc., managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container)
Nov 24 22:37:03 compute-0 podman[256529]: 2025-11-24 22:37:03.555811566 +0000 UTC m=+0.096256142 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi)
Nov 24 22:37:03 compute-0 podman[256522]: 2025-11-24 22:37:03.56883607 +0000 UTC m=+0.127423279 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, config_id=edpm, distribution-scope=public, version=9.4, release=1214.1726694543, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, vendor=Red Hat, Inc., name=ubi9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, managed_by=edpm_ansible)
Nov 24 22:37:04 compute-0 nova_compute[189608]: 2025-11-24 22:37:04.314 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:37:05 compute-0 sshd-session[256520]: Disconnecting invalid user tomcat 185.217.1.246 port 43420: Change of username or service not allowed: (tomcat,ssh-connection) -> (seki,ssh-connection) [preauth]
Nov 24 22:37:07 compute-0 podman[256575]: 2025-11-24 22:37:07.53569892 +0000 UTC m=+0.090508914 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 24 22:37:07 compute-0 podman[256582]: 2025-11-24 22:37:07.575078844 +0000 UTC m=+0.099244815 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 22:37:07 compute-0 podman[256576]: 2025-11-24 22:37:07.601960628 +0000 UTC m=+0.139213576 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 24 22:37:07 compute-0 nova_compute[189608]: 2025-11-24 22:37:07.645 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:37:09 compute-0 nova_compute[189608]: 2025-11-24 22:37:09.317 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:37:11 compute-0 sshd-session[256638]: Invalid user seki from 185.217.1.246 port 17996
Nov 24 22:37:12 compute-0 sshd-session[256640]: Connection closed by authenticating user root 193.32.162.145 port 37948 [preauth]
Nov 24 22:37:12 compute-0 nova_compute[189608]: 2025-11-24 22:37:12.651 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:37:13 compute-0 sshd-session[256638]: Disconnecting invalid user seki 185.217.1.246 port 17996: Change of username or service not allowed: (seki,ssh-connection) -> (hduser,ssh-connection) [preauth]
Nov 24 22:37:14 compute-0 nova_compute[189608]: 2025-11-24 22:37:14.319 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:37:17 compute-0 nova_compute[189608]: 2025-11-24 22:37:17.656 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:37:18 compute-0 sshd-session[256642]: Invalid user hduser from 185.217.1.246 port 40107
Nov 24 22:37:18 compute-0 podman[256644]: 2025-11-24 22:37:18.694481878 +0000 UTC m=+0.096984864 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 24 22:37:19 compute-0 sshd-session[256642]: Disconnecting invalid user hduser 185.217.1.246 port 40107: Change of username or service not allowed: (hduser,ssh-connection) -> (secret,ssh-connection) [preauth]
Nov 24 22:37:19 compute-0 nova_compute[189608]: 2025-11-24 22:37:19.323 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:37:22 compute-0 nova_compute[189608]: 2025-11-24 22:37:22.662 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:37:23 compute-0 podman[256670]: 2025-11-24 22:37:23.539191145 +0000 UTC m=+0.087689286 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 24 22:37:23 compute-0 sshd-session[256668]: Invalid user secret from 185.217.1.246 port 8456
Nov 24 22:37:24 compute-0 nova_compute[189608]: 2025-11-24 22:37:24.325 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:37:24 compute-0 sshd-session[256668]: Disconnecting invalid user secret 185.217.1.246 port 8456: Change of username or service not allowed: (secret,ssh-connection) -> (victor,ssh-connection) [preauth]
Nov 24 22:37:24 compute-0 podman[256690]: 2025-11-24 22:37:24.544332547 +0000 UTC m=+0.102089954 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.build-date=20251118)
Nov 24 22:37:27 compute-0 nova_compute[189608]: 2025-11-24 22:37:27.668 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:37:27 compute-0 sshd-session[256709]: Invalid user victor from 185.217.1.246 port 17643
Nov 24 22:37:29 compute-0 sshd-session[256709]: Disconnecting invalid user victor 185.217.1.246 port 17643: Change of username or service not allowed: (victor,ssh-connection) -> (finance,ssh-connection) [preauth]
Nov 24 22:37:29 compute-0 nova_compute[189608]: 2025-11-24 22:37:29.327 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:37:29 compute-0 podman[203795]: time="2025-11-24T22:37:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:37:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:37:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Nov 24 22:37:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:37:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4811 "" "Go-http-client/1.1"
Nov 24 22:37:31 compute-0 openstack_network_exporter[205945]: ERROR   22:37:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:37:31 compute-0 openstack_network_exporter[205945]: ERROR   22:37:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:37:31 compute-0 openstack_network_exporter[205945]: ERROR   22:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:37:31 compute-0 openstack_network_exporter[205945]: ERROR   22:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:37:31 compute-0 openstack_network_exporter[205945]: ERROR   22:37:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:37:32 compute-0 nova_compute[189608]: 2025-11-24 22:37:32.672 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:37:33 compute-0 nova_compute[189608]: 2025-11-24 22:37:33.790 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:37:34 compute-0 nova_compute[189608]: 2025-11-24 22:37:34.330 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:37:34 compute-0 podman[256714]: 2025-11-24 22:37:34.553443922 +0000 UTC m=+0.101624138 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, config_id=edpm, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, name=ubi9, io.openshift.expose-services=, version=9.4, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543)
Nov 24 22:37:34 compute-0 podman[256715]: 2025-11-24 22:37:34.57977835 +0000 UTC m=+0.122760586 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.openshift.expose-services=, version=9.6, maintainer=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, name=ubi9-minimal)
Nov 24 22:37:34 compute-0 podman[256716]: 2025-11-24 22:37:34.587725457 +0000 UTC m=+0.125872562 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251118, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 24 22:37:35 compute-0 sshd-session[256712]: Invalid user finance from 185.217.1.246 port 48550
Nov 24 22:37:36 compute-0 nova_compute[189608]: 2025-11-24 22:37:36.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:37:36 compute-0 nova_compute[189608]: 2025-11-24 22:37:36.793 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 22:37:37 compute-0 nova_compute[189608]: 2025-11-24 22:37:37.174 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "refresh_cache-715e08a7-7174-4e14-a83d-67aab18333d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:37:37 compute-0 nova_compute[189608]: 2025-11-24 22:37:37.175 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquired lock "refresh_cache-715e08a7-7174-4e14-a83d-67aab18333d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:37:37 compute-0 nova_compute[189608]: 2025-11-24 22:37:37.176 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 24 22:37:37 compute-0 sshd-session[256712]: Disconnecting invalid user finance 185.217.1.246 port 48550: Change of username or service not allowed: (finance,ssh-connection) -> (astra,ssh-connection) [preauth]
Nov 24 22:37:37 compute-0 nova_compute[189608]: 2025-11-24 22:37:37.676 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:37:38 compute-0 nova_compute[189608]: 2025-11-24 22:37:38.401 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Updating instance_info_cache with network_info: [{"id": "9d8978ca-0c88-4b94-bebb-cca47795447e", "address": "fa:16:3e:be:93:b1", "network": {"id": "a164481b-21c8-4cae-a6e9-b470d8a55a1f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6957a775da42c9b535753d6b0279d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d8978ca-0c", "ovs_interfaceid": "9d8978ca-0c88-4b94-bebb-cca47795447e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:37:38 compute-0 nova_compute[189608]: 2025-11-24 22:37:38.415 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Releasing lock "refresh_cache-715e08a7-7174-4e14-a83d-67aab18333d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:37:38 compute-0 nova_compute[189608]: 2025-11-24 22:37:38.416 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 24 22:37:38 compute-0 nova_compute[189608]: 2025-11-24 22:37:38.417 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:37:38 compute-0 nova_compute[189608]: 2025-11-24 22:37:38.442 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:37:38 compute-0 nova_compute[189608]: 2025-11-24 22:37:38.442 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:37:38 compute-0 nova_compute[189608]: 2025-11-24 22:37:38.442 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:37:38 compute-0 nova_compute[189608]: 2025-11-24 22:37:38.442 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 22:37:38 compute-0 podman[256771]: 2025-11-24 22:37:38.503532371 +0000 UTC m=+0.063542386 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 24 22:37:38 compute-0 nova_compute[189608]: 2025-11-24 22:37:38.509 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:37:38 compute-0 podman[256773]: 2025-11-24 22:37:38.526424222 +0000 UTC m=+0.074636550 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:37:38 compute-0 nova_compute[189608]: 2025-11-24 22:37:38.574 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:37:38 compute-0 nova_compute[189608]: 2025-11-24 22:37:38.576 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:37:38 compute-0 podman[256772]: 2025-11-24 22:37:38.577475898 +0000 UTC m=+0.132719254 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 22:37:38 compute-0 nova_compute[189608]: 2025-11-24 22:37:38.632 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:37:38 compute-0 nova_compute[189608]: 2025-11-24 22:37:38.639 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:37:38 compute-0 nova_compute[189608]: 2025-11-24 22:37:38.702 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:37:38 compute-0 nova_compute[189608]: 2025-11-24 22:37:38.703 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:37:38 compute-0 nova_compute[189608]: 2025-11-24 22:37:38.762 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:37:39 compute-0 nova_compute[189608]: 2025-11-24 22:37:39.063 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:37:39 compute-0 nova_compute[189608]: 2025-11-24 22:37:39.065 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4838MB free_disk=72.06925964355469GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 22:37:39 compute-0 nova_compute[189608]: 2025-11-24 22:37:39.066 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:37:39 compute-0 nova_compute[189608]: 2025-11-24 22:37:39.067 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:37:39 compute-0 nova_compute[189608]: 2025-11-24 22:37:39.228 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance a3bee9ba-6618-44bd-a443-da9fff6862a9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:37:39 compute-0 nova_compute[189608]: 2025-11-24 22:37:39.229 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance 715e08a7-7174-4e14-a83d-67aab18333d8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:37:39 compute-0 nova_compute[189608]: 2025-11-24 22:37:39.229 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 22:37:39 compute-0 nova_compute[189608]: 2025-11-24 22:37:39.230 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 22:37:39 compute-0 nova_compute[189608]: 2025-11-24 22:37:39.288 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:37:39 compute-0 nova_compute[189608]: 2025-11-24 22:37:39.309 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:37:39 compute-0 nova_compute[189608]: 2025-11-24 22:37:39.312 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 22:37:39 compute-0 nova_compute[189608]: 2025-11-24 22:37:39.312 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.246s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:37:39 compute-0 nova_compute[189608]: 2025-11-24 22:37:39.331 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:37:42 compute-0 nova_compute[189608]: 2025-11-24 22:37:42.680 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:37:44 compute-0 nova_compute[189608]: 2025-11-24 22:37:44.334 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:37:44 compute-0 sshd-session[256848]: Connection reset by 147.185.132.93 port 62236 [preauth]
Nov 24 22:37:44 compute-0 nova_compute[189608]: 2025-11-24 22:37:44.689 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:37:44 compute-0 nova_compute[189608]: 2025-11-24 22:37:44.690 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:37:44 compute-0 nova_compute[189608]: 2025-11-24 22:37:44.792 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:37:45 compute-0 sshd-session[256849]: Invalid user astra from 185.217.1.246 port 14388
Nov 24 22:37:45 compute-0 sshd-session[256849]: Disconnecting invalid user astra 185.217.1.246 port 14388: Change of username or service not allowed: (astra,ssh-connection) -> (support,ssh-connection) [preauth]
Nov 24 22:37:46 compute-0 nova_compute[189608]: 2025-11-24 22:37:46.792 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:37:47 compute-0 nova_compute[189608]: 2025-11-24 22:37:47.685 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:37:49 compute-0 nova_compute[189608]: 2025-11-24 22:37:49.336 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:37:49 compute-0 sshd-session[256852]: Invalid user support from 185.217.1.246 port 34047
Nov 24 22:37:49 compute-0 podman[256854]: 2025-11-24 22:37:49.519169427 +0000 UTC m=+0.084118994 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 24 22:37:49 compute-0 nova_compute[189608]: 2025-11-24 22:37:49.792 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:37:49 compute-0 nova_compute[189608]: 2025-11-24 22:37:49.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:37:49 compute-0 nova_compute[189608]: 2025-11-24 22:37:49.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:37:49 compute-0 nova_compute[189608]: 2025-11-24 22:37:49.794 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 22:37:52 compute-0 nova_compute[189608]: 2025-11-24 22:37:52.689 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:37:53 compute-0 sshd-session[256852]: Disconnecting invalid user support 185.217.1.246 port 34047: Change of username or service not allowed: (support,ssh-connection) -> (ftpuser,ssh-connection) [preauth]
Nov 24 22:37:54 compute-0 nova_compute[189608]: 2025-11-24 22:37:54.340 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:37:54 compute-0 podman[256879]: 2025-11-24 22:37:54.566534491 +0000 UTC m=+0.116376552 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 22:37:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:37:54.602 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:37:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:37:54.603 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:37:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:37:54.604 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:37:54 compute-0 podman[256897]: 2025-11-24 22:37:54.681668472 +0000 UTC m=+0.075035001 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844)
Nov 24 22:37:57 compute-0 nova_compute[189608]: 2025-11-24 22:37:57.695 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:37:58 compute-0 sshd-session[256916]: Invalid user ftpuser from 185.217.1.246 port 6761
Nov 24 22:37:59 compute-0 nova_compute[189608]: 2025-11-24 22:37:59.342 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:37:59 compute-0 podman[203795]: time="2025-11-24T22:37:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:37:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:37:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Nov 24 22:37:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:37:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4807 "" "Go-http-client/1.1"
Nov 24 22:38:01 compute-0 openstack_network_exporter[205945]: ERROR   22:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:38:01 compute-0 openstack_network_exporter[205945]: ERROR   22:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:38:01 compute-0 openstack_network_exporter[205945]: ERROR   22:38:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:38:01 compute-0 openstack_network_exporter[205945]: ERROR   22:38:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:38:01 compute-0 openstack_network_exporter[205945]: ERROR   22:38:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:38:02 compute-0 nova_compute[189608]: 2025-11-24 22:38:02.697 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:38:03 compute-0 sshd-session[256916]: Disconnecting invalid user ftpuser 185.217.1.246 port 6761: Change of username or service not allowed: (ftpuser,ssh-connection) -> (csgo,ssh-connection) [preauth]
Nov 24 22:38:04 compute-0 nova_compute[189608]: 2025-11-24 22:38:04.344 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:38:05 compute-0 podman[256920]: 2025-11-24 22:38:05.539645225 +0000 UTC m=+0.096524491 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, io.buildah.version=1.33.7, version=9.6, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, config_id=edpm, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, build-date=2025-08-20T13:12:41, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Nov 24 22:38:05 compute-0 podman[256919]: 2025-11-24 22:38:05.54940496 +0000 UTC m=+0.110351813 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, release-0.7.12=, com.redhat.component=ubi9-container, config_id=edpm, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543)
Nov 24 22:38:05 compute-0 podman[256921]: 2025-11-24 22:38:05.581982056 +0000 UTC m=+0.132809373 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 22:38:07 compute-0 nova_compute[189608]: 2025-11-24 22:38:07.703 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:38:09 compute-0 nova_compute[189608]: 2025-11-24 22:38:09.348 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:38:09 compute-0 podman[256975]: 2025-11-24 22:38:09.578054519 +0000 UTC m=+0.121274864 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 22:38:09 compute-0 podman[256976]: 2025-11-24 22:38:09.586038798 +0000 UTC m=+0.129189171 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 22:38:09 compute-0 podman[256977]: 2025-11-24 22:38:09.589405783 +0000 UTC m=+0.120353645 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 22:38:10 compute-0 sshd-session[256918]: Invalid user csgo from 185.217.1.246 port 45076
Nov 24 22:38:12 compute-0 nova_compute[189608]: 2025-11-24 22:38:12.708 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:38:13 compute-0 sshd-session[256918]: Disconnecting invalid user csgo 185.217.1.246 port 45076: Change of username or service not allowed: (csgo,ssh-connection) -> (pi,ssh-connection) [preauth]
Nov 24 22:38:14 compute-0 nova_compute[189608]: 2025-11-24 22:38:14.348 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.635 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.635 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.635 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.638 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f4c57fbbfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.638 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.639 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.639 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.639 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.639 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.640 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.640 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.640 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.640 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.640 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.640 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.640 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.640 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.640 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.641 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.641 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.641 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.641 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.641 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.641 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.641 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.641 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.641 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.642 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.642 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.646 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a3bee9ba-6618-44bd-a443-da9fff6862a9', 'name': 'te-0491564-asg-haouulsup5dm-5bjba57zh6x7-pvowny27behx', 'flavor': {'id': 'a49f1e6c-1051-4dea-812e-0063121444a0', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ea88776c-3c0b-4e74-99b4-08aadc81390f'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4a6957a775da42c9b535753d6b0279d6', 'user_id': 'fcf527fb124b42b9ab6a20cc0938b39f', 'hostId': '81545f88e9d372ef5a81fb4c58a4232a2d7e6782c7ccb5b924a3030b', 'status': 'active', 'metadata': {'metering.server_group': 'c6477657-e9b0-476c-83b3-9dc474e946c6'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.650 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '715e08a7-7174-4e14-a83d-67aab18333d8', 'name': 'te-0491564-asg-haouulsup5dm-khtqnyy46ddc-wz3ip25x6yfo', 'flavor': {'id': 'a49f1e6c-1051-4dea-812e-0063121444a0', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ea88776c-3c0b-4e74-99b4-08aadc81390f'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4a6957a775da42c9b535753d6b0279d6', 'user_id': 'fcf527fb124b42b9ab6a20cc0938b39f', 'hostId': '81545f88e9d372ef5a81fb4c58a4232a2d7e6782c7ccb5b924a3030b', 'status': 'active', 'metadata': {'metering.server_group': 'c6477657-e9b0-476c-83b3-9dc474e946c6'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.651 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.651 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.651 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.652 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.653 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-24T22:38:17.652157) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.657 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.662 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.663 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.663 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f4c580340b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.663 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.664 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.664 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.665 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.665 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.665 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-24T22:38:17.664981) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.666 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.667 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.667 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f4c57fba4e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.667 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.668 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.668 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.669 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-24T22:38:17.669318) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.669 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.685 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.686 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.704 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.704 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.705 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.705 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f4c57fbb080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.706 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.706 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.707 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.707 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.708 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-24T22:38:17.707686) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:38:17 compute-0 nova_compute[189608]: 2025-11-24 22:38:17.713 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.749 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.read.bytes volume: 30194176 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.750 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.784 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.read.bytes volume: 30145536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.785 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.787 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f4c57fbb1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.788 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.788 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.789 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.790 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.791 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.read.latency volume: 1080868082 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.791 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-24T22:38:17.790023) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.792 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.read.latency volume: 103540431 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.793 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.read.latency volume: 1116528238 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.794 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.read.latency volume: 67346772 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.795 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.795 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f4c57fb9730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.796 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.796 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.797 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.798 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.799 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-24T22:38:17.798074) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.819 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/cpu volume: 334720000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.838 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/cpu volume: 291020000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.838 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.839 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f4c57fbb200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.839 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.839 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.839 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.840 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.840 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.read.requests volume: 1088 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.841 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-24T22:38:17.840186) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.841 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.842 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.read.requests volume: 1092 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.842 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.843 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.843 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f4c57fbb260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.843 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.843 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.844 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.844 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.844 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.usage volume: 30081024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.845 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.845 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.846 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.846 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.847 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f4c57fbb2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.847 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.847 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.847 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.848 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.848 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.write.bytes volume: 73207808 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.848 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.849 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.write.bytes volume: 72876032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.850 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.850 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.851 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f4c57fbb320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.851 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.851 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.851 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.851 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-24T22:38:17.844308) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.851 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.852 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-24T22:38:17.848172) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.852 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.write.latency volume: 4167302189 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.852 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.852 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.write.latency volume: 2401662271 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.853 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.854 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.854 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f4c57fbb380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.854 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.854 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.854 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.855 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.855 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.write.requests volume: 323 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.855 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.856 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.write.requests volume: 319 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.856 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.857 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.857 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f4c58034380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.857 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.857 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.857 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.858 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.858 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.858 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.859 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.859 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f4c57fbb740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.859 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.860 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.860 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.860 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.860 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.861 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.861 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.861 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f4c57fbb3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.861 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.862 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.862 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.862 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.863 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.863 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f4c57fbb9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.863 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.863 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f4c57fbb440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.864 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.864 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.864 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.864 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.865 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.865 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f4c57fbbc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.865 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.865 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-24T22:38:17.851925) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.865 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.865 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-24T22:38:17.855045) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.865 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.865 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-24T22:38:17.858152) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.865 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-24T22:38:17.860449) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.866 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-24T22:38:17.862454) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.866 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-24T22:38:17.864545) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.866 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.866 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.867 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.867 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.867 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f4c57fbbcb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.868 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.868 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.868 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.868 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.868 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.869 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.869 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.869 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f4c57fbbd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.870 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.870 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.870 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.870 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-24T22:38:17.865898) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.870 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.870 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-24T22:38:17.868556) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.870 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.871 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.872 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.872 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f4c57fbbda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.872 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.872 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.872 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.873 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.873 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.873 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.874 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.874 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f4c57fbbe30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.874 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.874 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.875 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.875 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.875 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.875 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.876 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.876 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f4c57fbb680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.876 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.876 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.876 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.877 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.877 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/memory.usage volume: 42.21484375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.877 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/memory.usage volume: 43.50390625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.878 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.878 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f4c57fbbec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.878 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.878 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f4c57fbb6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.879 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.879 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.879 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.879 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-24T22:38:17.870768) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.879 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-24T22:38:17.872996) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.879 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-24T22:38:17.875075) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.880 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-24T22:38:17.877305) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.880 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.880 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.incoming.bytes volume: 1520 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.881 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.881 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.881 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f4c59b99130>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.881 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.882 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.882 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.882 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.882 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.allocation volume: 30744576 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.883 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.883 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.allocation volume: 30482432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.883 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-24T22:38:17.879521) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.883 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.884 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-24T22:38:17.882399) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.884 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.884 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f4c57fbbf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.884 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.885 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.885 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.885 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.885 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.885 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.886 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.887 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-24T22:38:17.885405) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.895 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.895 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.895 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.895 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.896 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.896 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.896 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.896 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.896 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.896 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.896 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.896 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.896 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.896 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.896 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.896 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.897 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.897 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.897 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.897 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.897 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.897 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.897 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.897 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.897 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:38:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:38:17.897 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:38:17 compute-0 sshd-session[257040]: Invalid user pi from 185.217.1.246 port 20614
Nov 24 22:38:19 compute-0 nova_compute[189608]: 2025-11-24 22:38:19.351 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:38:20 compute-0 podman[257044]: 2025-11-24 22:38:20.507152549 +0000 UTC m=+0.070280283 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 24 22:38:22 compute-0 sshd-session[257040]: Disconnecting invalid user pi 185.217.1.246 port 20614: Change of username or service not allowed: (pi,ssh-connection) -> (debian,ssh-connection) [preauth]
Nov 24 22:38:22 compute-0 nova_compute[189608]: 2025-11-24 22:38:22.718 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:38:24 compute-0 nova_compute[189608]: 2025-11-24 22:38:24.354 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:38:25 compute-0 podman[257069]: 2025-11-24 22:38:25.567195779 +0000 UTC m=+0.113944236 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 22:38:25 compute-0 podman[257070]: 2025-11-24 22:38:25.575205779 +0000 UTC m=+0.120267533 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 24 22:38:27 compute-0 nova_compute[189608]: 2025-11-24 22:38:27.723 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:38:28 compute-0 sshd-session[257105]: Invalid user sol from 45.148.10.240 port 38862
Nov 24 22:38:28 compute-0 sshd-session[257105]: Connection closed by invalid user sol 45.148.10.240 port 38862 [preauth]
Nov 24 22:38:29 compute-0 nova_compute[189608]: 2025-11-24 22:38:29.356 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:38:29 compute-0 podman[203795]: time="2025-11-24T22:38:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:38:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:38:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Nov 24 22:38:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:38:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4813 "" "Go-http-client/1.1"
Nov 24 22:38:31 compute-0 sshd-session[257103]: Invalid user debian from 185.217.1.246 port 1660
Nov 24 22:38:31 compute-0 openstack_network_exporter[205945]: ERROR   22:38:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:38:31 compute-0 openstack_network_exporter[205945]: ERROR   22:38:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:38:31 compute-0 openstack_network_exporter[205945]: ERROR   22:38:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:38:31 compute-0 openstack_network_exporter[205945]: ERROR   22:38:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:38:31 compute-0 openstack_network_exporter[205945]: ERROR   22:38:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:38:32 compute-0 nova_compute[189608]: 2025-11-24 22:38:32.726 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:38:34 compute-0 sshd-session[257103]: Disconnecting invalid user debian 185.217.1.246 port 1660: Change of username or service not allowed: (debian,ssh-connection) -> (dbadmin,ssh-connection) [preauth]
Nov 24 22:38:34 compute-0 nova_compute[189608]: 2025-11-24 22:38:34.362 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:38:36 compute-0 podman[257109]: 2025-11-24 22:38:36.540403647 +0000 UTC m=+0.094224830 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vendor=Red Hat, Inc., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, distribution-scope=public, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-minimal-container, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Nov 24 22:38:36 compute-0 podman[257108]: 2025-11-24 22:38:36.555523108 +0000 UTC m=+0.110744005 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, architecture=x86_64, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, vcs-type=git, version=9.4, release-0.7.12=, container_name=kepler, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Nov 24 22:38:36 compute-0 podman[257110]: 2025-11-24 22:38:36.570536636 +0000 UTC m=+0.116344049 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 24 22:38:37 compute-0 nova_compute[189608]: 2025-11-24 22:38:37.729 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:38:37 compute-0 nova_compute[189608]: 2025-11-24 22:38:37.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:38:37 compute-0 nova_compute[189608]: 2025-11-24 22:38:37.794 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 22:38:37 compute-0 nova_compute[189608]: 2025-11-24 22:38:37.794 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 22:38:38 compute-0 nova_compute[189608]: 2025-11-24 22:38:38.251 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "refresh_cache-a3bee9ba-6618-44bd-a443-da9fff6862a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:38:38 compute-0 nova_compute[189608]: 2025-11-24 22:38:38.251 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquired lock "refresh_cache-a3bee9ba-6618-44bd-a443-da9fff6862a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:38:38 compute-0 nova_compute[189608]: 2025-11-24 22:38:38.252 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 24 22:38:38 compute-0 nova_compute[189608]: 2025-11-24 22:38:38.252 189613 DEBUG nova.objects.instance [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lazy-loading 'info_cache' on Instance uuid a3bee9ba-6618-44bd-a443-da9fff6862a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:38:39 compute-0 nova_compute[189608]: 2025-11-24 22:38:39.365 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:38:39 compute-0 nova_compute[189608]: 2025-11-24 22:38:39.568 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Updating instance_info_cache with network_info: [{"id": "5efccbc3-b2bb-4d9d-ba64-9382a4b2487b", "address": "fa:16:3e:40:6c:bb", "network": {"id": "a164481b-21c8-4cae-a6e9-b470d8a55a1f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.88", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6957a775da42c9b535753d6b0279d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5efccbc3-b2", "ovs_interfaceid": "5efccbc3-b2bb-4d9d-ba64-9382a4b2487b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:38:39 compute-0 nova_compute[189608]: 2025-11-24 22:38:39.582 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Releasing lock "refresh_cache-a3bee9ba-6618-44bd-a443-da9fff6862a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:38:39 compute-0 nova_compute[189608]: 2025-11-24 22:38:39.583 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 24 22:38:39 compute-0 nova_compute[189608]: 2025-11-24 22:38:39.584 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:38:39 compute-0 nova_compute[189608]: 2025-11-24 22:38:39.605 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:38:39 compute-0 nova_compute[189608]: 2025-11-24 22:38:39.606 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:38:39 compute-0 nova_compute[189608]: 2025-11-24 22:38:39.606 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:38:39 compute-0 nova_compute[189608]: 2025-11-24 22:38:39.606 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 22:38:39 compute-0 nova_compute[189608]: 2025-11-24 22:38:39.689 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:38:39 compute-0 nova_compute[189608]: 2025-11-24 22:38:39.764 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:38:39 compute-0 nova_compute[189608]: 2025-11-24 22:38:39.766 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:38:39 compute-0 nova_compute[189608]: 2025-11-24 22:38:39.831 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:38:39 compute-0 nova_compute[189608]: 2025-11-24 22:38:39.841 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:38:39 compute-0 nova_compute[189608]: 2025-11-24 22:38:39.903 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:38:39 compute-0 nova_compute[189608]: 2025-11-24 22:38:39.904 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:38:39 compute-0 nova_compute[189608]: 2025-11-24 22:38:39.991 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:38:40 compute-0 nova_compute[189608]: 2025-11-24 22:38:40.313 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:38:40 compute-0 nova_compute[189608]: 2025-11-24 22:38:40.314 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4841MB free_disk=72.06925964355469GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 22:38:40 compute-0 nova_compute[189608]: 2025-11-24 22:38:40.315 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:38:40 compute-0 nova_compute[189608]: 2025-11-24 22:38:40.315 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:38:40 compute-0 nova_compute[189608]: 2025-11-24 22:38:40.400 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance a3bee9ba-6618-44bd-a443-da9fff6862a9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:38:40 compute-0 nova_compute[189608]: 2025-11-24 22:38:40.401 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance 715e08a7-7174-4e14-a83d-67aab18333d8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:38:40 compute-0 nova_compute[189608]: 2025-11-24 22:38:40.401 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 22:38:40 compute-0 nova_compute[189608]: 2025-11-24 22:38:40.401 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 22:38:40 compute-0 sshd-session[257107]: Invalid user dbadmin from 185.217.1.246 port 39325
Nov 24 22:38:40 compute-0 podman[257177]: 2025-11-24 22:38:40.465886568 +0000 UTC m=+0.057852977 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 24 22:38:40 compute-0 nova_compute[189608]: 2025-11-24 22:38:40.467 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:38:40 compute-0 podman[257179]: 2025-11-24 22:38:40.47462242 +0000 UTC m=+0.055209894 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 24 22:38:40 compute-0 nova_compute[189608]: 2025-11-24 22:38:40.489 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:38:40 compute-0 nova_compute[189608]: 2025-11-24 22:38:40.490 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 22:38:40 compute-0 nova_compute[189608]: 2025-11-24 22:38:40.491 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.175s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:38:40 compute-0 podman[257178]: 2025-11-24 22:38:40.53010028 +0000 UTC m=+0.112564862 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 22:38:41 compute-0 sshd-session[257107]: Disconnecting invalid user dbadmin 185.217.1.246 port 39325: Change of username or service not allowed: (dbadmin,ssh-connection) -> (hacluster,ssh-connection) [preauth]
Nov 24 22:38:42 compute-0 nova_compute[189608]: 2025-11-24 22:38:42.734 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:38:44 compute-0 nova_compute[189608]: 2025-11-24 22:38:44.366 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:38:44 compute-0 nova_compute[189608]: 2025-11-24 22:38:44.699 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:38:45 compute-0 nova_compute[189608]: 2025-11-24 22:38:45.789 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:38:45 compute-0 nova_compute[189608]: 2025-11-24 22:38:45.792 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:38:46 compute-0 sshd-session[257242]: Invalid user hacluster from 185.217.1.246 port 63276
Nov 24 22:38:46 compute-0 sshd-session[257242]: Disconnecting invalid user hacluster 185.217.1.246 port 63276: Change of username or service not allowed: (hacluster,ssh-connection) -> (minecraft,ssh-connection) [preauth]
Nov 24 22:38:47 compute-0 nova_compute[189608]: 2025-11-24 22:38:47.739 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:38:48 compute-0 nova_compute[189608]: 2025-11-24 22:38:48.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:38:49 compute-0 nova_compute[189608]: 2025-11-24 22:38:49.369 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:38:49 compute-0 sshd-session[257244]: Invalid user minecraft from 185.217.1.246 port 18089
Nov 24 22:38:49 compute-0 sshd-session[257244]: Disconnecting invalid user minecraft 185.217.1.246 port 18089: Change of username or service not allowed: (minecraft,ssh-connection) -> (user1,ssh-connection) [preauth]
Nov 24 22:38:49 compute-0 nova_compute[189608]: 2025-11-24 22:38:49.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:38:49 compute-0 nova_compute[189608]: 2025-11-24 22:38:49.793 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 22:38:51 compute-0 podman[257247]: 2025-11-24 22:38:51.547023821 +0000 UTC m=+0.102375845 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 24 22:38:51 compute-0 nova_compute[189608]: 2025-11-24 22:38:51.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:38:51 compute-0 nova_compute[189608]: 2025-11-24 22:38:51.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:38:52 compute-0 nova_compute[189608]: 2025-11-24 22:38:52.745 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:38:54 compute-0 nova_compute[189608]: 2025-11-24 22:38:54.372 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:38:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:38:54.604 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:38:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:38:54.605 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:38:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:38:54.606 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:38:56 compute-0 podman[257272]: 2025-11-24 22:38:56.52114771 +0000 UTC m=+0.077828428 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 24 22:38:56 compute-0 podman[257273]: 2025-11-24 22:38:56.53686986 +0000 UTC m=+0.084533978 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 24 22:38:57 compute-0 sshd-session[257270]: Invalid user user1 from 185.217.1.246 port 36708
Nov 24 22:38:57 compute-0 nova_compute[189608]: 2025-11-24 22:38:57.749 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:38:59 compute-0 nova_compute[189608]: 2025-11-24 22:38:59.373 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:38:59 compute-0 sshd-session[257270]: Disconnecting invalid user user1 185.217.1.246 port 36708: Change of username or service not allowed: (user1,ssh-connection) -> (hugo,ssh-connection) [preauth]
Nov 24 22:38:59 compute-0 podman[203795]: time="2025-11-24T22:38:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:38:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:38:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Nov 24 22:38:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:38:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4801 "" "Go-http-client/1.1"
Nov 24 22:39:01 compute-0 openstack_network_exporter[205945]: ERROR   22:39:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:39:01 compute-0 openstack_network_exporter[205945]: ERROR   22:39:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:39:01 compute-0 openstack_network_exporter[205945]: ERROR   22:39:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:39:01 compute-0 openstack_network_exporter[205945]: ERROR   22:39:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:39:01 compute-0 openstack_network_exporter[205945]: ERROR   22:39:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:39:02 compute-0 nova_compute[189608]: 2025-11-24 22:39:02.751 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:39:03 compute-0 sshd-session[257312]: Invalid user hugo from 185.217.1.246 port 10039
Nov 24 22:39:04 compute-0 nova_compute[189608]: 2025-11-24 22:39:04.377 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:39:06 compute-0 sshd-session[257312]: Disconnecting invalid user hugo 185.217.1.246 port 10039: Change of username or service not allowed: (hugo,ssh-connection) -> (cisco,ssh-connection) [preauth]
Nov 24 22:39:07 compute-0 podman[257316]: 2025-11-24 22:39:07.558923377 +0000 UTC m=+0.095785909 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 24 22:39:07 compute-0 podman[257315]: 2025-11-24 22:39:07.586647811 +0000 UTC m=+0.119713315 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9-minimal, vcs-type=git, managed_by=edpm_ansible, maintainer=Red Hat, Inc., release=1755695350, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Nov 24 22:39:07 compute-0 podman[257314]: 2025-11-24 22:39:07.602434404 +0000 UTC m=+0.141887257 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, architecture=x86_64, vcs-type=git, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, io.openshift.tags=base rhel9, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, name=ubi9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler)
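Annotation: each podman health_status event above embeds name=<container> and health_status=<value> inside one long key=value list. A hypothetical extractor for those two fields from a journal line; the lookbehind keeps it from matching container_name= or org.label-schema.name=.

```python
import re

# Pull "name=" and "health_status=" out of a podman health_status journal line.
NAME = re.compile(r"(?<![\w.\-])name=([A-Za-z0-9_.\-]+)")
STATUS = re.compile(r"\bhealth_status=([A-Za-z]+)")

def container_health(line):
    """Return (container_name, health_status) or None if not a health event."""
    if " container health_status " not in line:
        return None
    name, status = NAME.search(line), STATUS.search(line)
    return (name.group(1), status.group(1)) if name and status else None

# e.g. container_health(journal_line) -> ("ceilometer_agent_ipmi", "healthy")
```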
Nov 24 22:39:07 compute-0 nova_compute[189608]: 2025-11-24 22:39:07.756 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:39:09 compute-0 nova_compute[189608]: 2025-11-24 22:39:09.379 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:39:11 compute-0 podman[257377]: 2025-11-24 22:39:11.579125522 +0000 UTC m=+0.115720701 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 24 22:39:11 compute-0 podman[257375]: 2025-11-24 22:39:11.585991586 +0000 UTC m=+0.129856722 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 24 22:39:11 compute-0 podman[257376]: 2025-11-24 22:39:11.609637033 +0000 UTC m=+0.145903442 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
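Annotation: the node_exporter entry above starts the systemd collector with --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service, so only matching units are exported. A quick, illustrative check of that pattern against a few hypothetical unit names (Python's re stands in for node_exporter's anchored Go regexp here).

```python
import re

# The unit-include pattern copied from the node_exporter command line above.
UNIT_INCLUDE = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

for unit in ["edpm_nova_compute.service", "ovsdb-server.service",
             "virtqemud.service", "sshd.service"]:
    print(unit, bool(UNIT_INCLUDE.fullmatch(unit)))
# the first three match, sshd.service does not
```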
Nov 24 22:39:12 compute-0 nova_compute[189608]: 2025-11-24 22:39:12.760 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:39:13 compute-0 sshd-session[257373]: Invalid user cisco from 185.217.1.246 port 47112
Nov 24 22:39:13 compute-0 sshd-session[257373]: Disconnecting invalid user cisco 185.217.1.246 port 47112: Change of username or service not allowed: (cisco,ssh-connection) -> (kali,ssh-connection) [preauth]
Nov 24 22:39:14 compute-0 nova_compute[189608]: 2025-11-24 22:39:14.384 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:39:17 compute-0 sshd-session[257442]: Invalid user kali from 185.217.1.246 port 2584
Nov 24 22:39:17 compute-0 nova_compute[189608]: 2025-11-24 22:39:17.762 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:39:19 compute-0 sshd-session[257442]: Disconnecting invalid user kali 185.217.1.246 port 2584: Change of username or service not allowed: (kali,ssh-connection) -> (123456,ssh-connection) [preauth]
Nov 24 22:39:19 compute-0 nova_compute[189608]: 2025-11-24 22:39:19.388 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:39:22 compute-0 podman[257447]: 2025-11-24 22:39:22.519701967 +0000 UTC m=+0.081986078 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 24 22:39:22 compute-0 nova_compute[189608]: 2025-11-24 22:39:22.769 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:39:24 compute-0 nova_compute[189608]: 2025-11-24 22:39:24.390 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:39:26 compute-0 sshd-session[257446]: Invalid user 123456 from 185.217.1.246 port 27149
Nov 24 22:39:27 compute-0 sshd-session[257446]: Disconnecting invalid user 123456 185.217.1.246 port 27149: Change of username or service not allowed: (123456,ssh-connection) -> (sftp_user,ssh-connection) [preauth]
Nov 24 22:39:27 compute-0 podman[257471]: 2025-11-24 22:39:27.548503494 +0000 UTC m=+0.100979341 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd)
Nov 24 22:39:27 compute-0 podman[257472]: 2025-11-24 22:39:27.567716033 +0000 UTC m=+0.116461243 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true)
Nov 24 22:39:27 compute-0 nova_compute[189608]: 2025-11-24 22:39:27.774 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:39:29 compute-0 nova_compute[189608]: 2025-11-24 22:39:29.398 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:39:29 compute-0 podman[203795]: time="2025-11-24T22:39:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:39:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:39:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Nov 24 22:39:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:39:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4805 "" "Go-http-client/1.1"
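Annotation: the two GET requests above are the podman API service answering podman_exporter over the socket named in its CONTAINER_HOST setting (unix:///run/podman/podman.sock). The same container-listing endpoint can be queried by hand; a sketch using curl's --unix-socket flag, assuming the socket path from the exporter config and the Names/State fields of the libpod list response.

```python
import json
import subprocess

SOCK = "/run/podman/podman.sock"                      # from CONTAINER_HOST above
URL = "http://d/v4.9.3/libpod/containers/json?all=true"  # endpoint seen in the log

out = subprocess.run(
    ["curl", "-s", "--unix-socket", SOCK, URL],
    check=True, capture_output=True, text=True,
).stdout

for c in json.loads(out):
    print(c["Names"][0], c["State"])
```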
Nov 24 22:39:31 compute-0 openstack_network_exporter[205945]: ERROR   22:39:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:39:31 compute-0 openstack_network_exporter[205945]: ERROR   22:39:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:39:31 compute-0 openstack_network_exporter[205945]: ERROR   22:39:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:39:31 compute-0 openstack_network_exporter[205945]: ERROR   22:39:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:39:31 compute-0 openstack_network_exporter[205945]: ERROR   22:39:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:39:32 compute-0 sshd-session[257507]: Invalid user sftp_user from 185.217.1.246 port 53410
Nov 24 22:39:32 compute-0 sshd-session[257507]: Disconnecting invalid user sftp_user 185.217.1.246 port 53410: Change of username or service not allowed: (sftp_user,ssh-connection) -> (webapp,ssh-connection) [preauth]
Nov 24 22:39:32 compute-0 nova_compute[189608]: 2025-11-24 22:39:32.781 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:39:34 compute-0 nova_compute[189608]: 2025-11-24 22:39:34.401 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:39:37 compute-0 nova_compute[189608]: 2025-11-24 22:39:37.786 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:39:37 compute-0 nova_compute[189608]: 2025-11-24 22:39:37.788 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:39:37 compute-0 nova_compute[189608]: 2025-11-24 22:39:37.815 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:39:37 compute-0 nova_compute[189608]: 2025-11-24 22:39:37.847 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:39:37 compute-0 nova_compute[189608]: 2025-11-24 22:39:37.848 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:39:37 compute-0 nova_compute[189608]: 2025-11-24 22:39:37.849 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
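Annotation: the Acquiring/acquired/released triplet above is oslo.concurrency's lockutils logging around the resource tracker's critical section. An illustrative guard using the same library and the lock name from the log; this is a sketch of the pattern, not nova's actual decorator stack.

```python
from oslo_concurrency import lockutils

# Decorator form: everything inside runs while holding "compute_resources".
@lockutils.synchronized("compute_resources")
def clean_compute_node_cache():
    pass  # ... work done under the lock ...

# Context-manager form of the same guard.
with lockutils.lock("compute_resources"):
    pass
```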
Nov 24 22:39:37 compute-0 nova_compute[189608]: 2025-11-24 22:39:37.850 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 22:39:37 compute-0 nova_compute[189608]: 2025-11-24 22:39:37.976 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:39:38 compute-0 nova_compute[189608]: 2025-11-24 22:39:38.077 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:39:38 compute-0 nova_compute[189608]: 2025-11-24 22:39:38.080 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:39:38 compute-0 nova_compute[189608]: 2025-11-24 22:39:38.178 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:39:38 compute-0 nova_compute[189608]: 2025-11-24 22:39:38.191 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:39:38 compute-0 nova_compute[189608]: 2025-11-24 22:39:38.291 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:39:38 compute-0 nova_compute[189608]: 2025-11-24 22:39:38.293 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:39:38 compute-0 nova_compute[189608]: 2025-11-24 22:39:38.355 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
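Annotation: the resource audit above shells out to qemu-img under oslo_concurrency.prlimit, capping address space at 1 GiB (--as=1073741824) and CPU time at 30 s. The sketch below mirrors the logged command for one instance disk and parses its JSON output; the disk path is the one from the log and only exists on this host.

```python
import json
import subprocess

DISK = "/var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk"

cmd = [
    "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
    "--as=1073741824", "--cpu=30", "--",
    "env", "LC_ALL=C", "LANG=C",
    "qemu-img", "info", DISK, "--force-share", "--output=json",
]
result = subprocess.run(cmd, check=True, capture_output=True, text=True)
info = json.loads(result.stdout)
print(info["format"], info["virtual-size"])
```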
Nov 24 22:39:38 compute-0 podman[257524]: 2025-11-24 22:39:38.576334498 +0000 UTC m=+0.123273526 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release-0.7.12=, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, release=1214.1726694543, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, version=9.4, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, managed_by=edpm_ansible)
Nov 24 22:39:38 compute-0 podman[257525]: 2025-11-24 22:39:38.579995923 +0000 UTC m=+0.120885972 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9-minimal, config_id=edpm, distribution-scope=public, vendor=Red Hat, Inc., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2025-08-20T13:12:41)
Nov 24 22:39:38 compute-0 podman[257526]: 2025-11-24 22:39:38.59371605 +0000 UTC m=+0.119742976 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Nov 24 22:39:38 compute-0 sshd-session[257509]: Invalid user webapp from 185.217.1.246 port 19833
Nov 24 22:39:38 compute-0 nova_compute[189608]: 2025-11-24 22:39:38.830 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:39:38 compute-0 nova_compute[189608]: 2025-11-24 22:39:38.831 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4836MB free_disk=72.06925582885742GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 22:39:38 compute-0 nova_compute[189608]: 2025-11-24 22:39:38.831 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:39:38 compute-0 nova_compute[189608]: 2025-11-24 22:39:38.832 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:39:38 compute-0 nova_compute[189608]: 2025-11-24 22:39:38.913 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance a3bee9ba-6618-44bd-a443-da9fff6862a9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:39:38 compute-0 nova_compute[189608]: 2025-11-24 22:39:38.914 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance 715e08a7-7174-4e14-a83d-67aab18333d8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:39:38 compute-0 nova_compute[189608]: 2025-11-24 22:39:38.914 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 22:39:38 compute-0 nova_compute[189608]: 2025-11-24 22:39:38.914 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 22:39:38 compute-0 nova_compute[189608]: 2025-11-24 22:39:38.985 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:39:38 compute-0 nova_compute[189608]: 2025-11-24 22:39:38.998 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
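Annotation: the inventory dict above is what the compute node reports to Placement; the capacity the scheduler can consume is (total - reserved) * allocation_ratio per resource class. A worked check using the exact numbers from the log line.

```python
# Inventory as reported for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df above.
inventory = {
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB": {"total": 79, "reserved": 1, "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g}")
# VCPU: 32, MEMORY_MB: 7167, DISK_GB: 70.2
```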
Nov 24 22:39:39 compute-0 nova_compute[189608]: 2025-11-24 22:39:39.000 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 22:39:39 compute-0 nova_compute[189608]: 2025-11-24 22:39:39.000 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.168s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:39:39 compute-0 nova_compute[189608]: 2025-11-24 22:39:39.404 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:39:39 compute-0 nova_compute[189608]: 2025-11-24 22:39:39.978 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:39:39 compute-0 nova_compute[189608]: 2025-11-24 22:39:39.979 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 22:39:40 compute-0 nova_compute[189608]: 2025-11-24 22:39:40.269 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "refresh_cache-715e08a7-7174-4e14-a83d-67aab18333d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:39:40 compute-0 nova_compute[189608]: 2025-11-24 22:39:40.270 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquired lock "refresh_cache-715e08a7-7174-4e14-a83d-67aab18333d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:39:40 compute-0 nova_compute[189608]: 2025-11-24 22:39:40.271 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 24 22:39:40 compute-0 sshd-session[257509]: Disconnecting invalid user webapp 185.217.1.246 port 19833: Change of username or service not allowed: (webapp,ssh-connection) -> (abc,ssh-connection) [preauth]
Nov 24 22:39:42 compute-0 podman[257579]: 2025-11-24 22:39:42.55988042 +0000 UTC m=+0.099907247 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 24 22:39:42 compute-0 podman[257581]: 2025-11-24 22:39:42.567386424 +0000 UTC m=+0.089527834 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 22:39:42 compute-0 podman[257580]: 2025-11-24 22:39:42.588171023 +0000 UTC m=+0.134691163 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 24 22:39:42 compute-0 nova_compute[189608]: 2025-11-24 22:39:42.726 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Updating instance_info_cache with network_info: [{"id": "9d8978ca-0c88-4b94-bebb-cca47795447e", "address": "fa:16:3e:be:93:b1", "network": {"id": "a164481b-21c8-4cae-a6e9-b470d8a55a1f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6957a775da42c9b535753d6b0279d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d8978ca-0c", "ovs_interfaceid": "9d8978ca-0c88-4b94-bebb-cca47795447e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
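Annotation: the instance_info_cache payload above is a list of VIF dicts. An illustrative helper that walks the same structure (trimmed to the fields it touches) to recover the MAC and fixed IPs the way the entry for port 9d8978ca-0c88-4b94-bebb-cca47795447e reads.

```python
def fixed_ips(network_info):
    """Yield (mac, [fixed ip, ...]) for every VIF in a network_info list."""
    for vif in network_info:
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"]]
        yield vif["address"], ips

# Minimal slice of the cache entry logged above.
nw_info = [{
    "id": "9d8978ca-0c88-4b94-bebb-cca47795447e",
    "address": "fa:16:3e:be:93:b1",
    "network": {"subnets": [{"ips": [{"address": "10.100.0.203"}]}]},
}]
print(list(fixed_ips(nw_info)))   # [('fa:16:3e:be:93:b1', ['10.100.0.203'])]
```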
Nov 24 22:39:42 compute-0 nova_compute[189608]: 2025-11-24 22:39:42.745 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Releasing lock "refresh_cache-715e08a7-7174-4e14-a83d-67aab18333d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:39:42 compute-0 nova_compute[189608]: 2025-11-24 22:39:42.746 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 24 22:39:42 compute-0 nova_compute[189608]: 2025-11-24 22:39:42.788 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:39:44 compute-0 nova_compute[189608]: 2025-11-24 22:39:44.408 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:39:45 compute-0 nova_compute[189608]: 2025-11-24 22:39:45.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:39:46 compute-0 nova_compute[189608]: 2025-11-24 22:39:46.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:39:47 compute-0 nova_compute[189608]: 2025-11-24 22:39:47.789 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:39:47 compute-0 nova_compute[189608]: 2025-11-24 22:39:47.793 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:39:48 compute-0 sshd-session[257643]: Invalid user abc from 185.217.1.246 port 46836
Nov 24 22:39:49 compute-0 nova_compute[189608]: 2025-11-24 22:39:49.411 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:39:50 compute-0 nova_compute[189608]: 2025-11-24 22:39:50.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:39:50 compute-0 sshd-session[257643]: Disconnecting invalid user abc 185.217.1.246 port 46836: Change of username or service not allowed: (abc,ssh-connection) -> (manager,ssh-connection) [preauth]
Nov 24 22:39:51 compute-0 nova_compute[189608]: 2025-11-24 22:39:51.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:39:51 compute-0 nova_compute[189608]: 2025-11-24 22:39:51.795 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:39:51 compute-0 nova_compute[189608]: 2025-11-24 22:39:51.796 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
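Annotation: the stream of "Running periodic task ComputeManager._*" lines comes from oslo.service's periodic task machinery driving the manager's timers. A stripped-down sketch of that machinery with one illustrative task; the class, task name, and spacing are placeholders, not nova's.

```python
from oslo_config import cfg
from oslo_service import periodic_task

class DemoManager(periodic_task.PeriodicTasks):
    """Toy manager: one periodic task registered the same way nova's are."""

    def __init__(self):
        super().__init__(cfg.CONF)

    @periodic_task.periodic_task(spacing=60)
    def _poll_something(self, context):
        print("polled")

mgr = DemoManager()
# A service loop would call this repeatedly; due tasks are logged and run.
mgr.run_periodic_tasks(context=None)
```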
Nov 24 22:39:52 compute-0 nova_compute[189608]: 2025-11-24 22:39:52.797 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:39:52 compute-0 nova_compute[189608]: 2025-11-24 22:39:52.799 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:39:53 compute-0 podman[257647]: 2025-11-24 22:39:53.601331048 +0000 UTC m=+0.131707599 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 24 22:39:54 compute-0 nova_compute[189608]: 2025-11-24 22:39:54.414 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:39:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:39:54.605 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:39:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:39:54.606 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:39:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:39:54.607 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
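The Acquiring/acquired/released triple above is the standard oslo.concurrency pattern: ProcessMonitor wraps _check_child_processes in a named in-process lock, and lockutils logs each transition at DEBUG. A minimal sketch of the same pattern, assuming only the lock name taken from the log; the function body is illustrative:

    import logging
    from oslo_concurrency import lockutils

    logging.basicConfig(level=logging.DEBUG)  # lockutils emits the acquire/release lines at DEBUG

    @lockutils.synchronized('_check_child_processes')
    def check_child_processes():
        # runs while the named lock is held; the "inner" seen in the log is
        # the decorator's internal wrapper doing the logging
        pass

    # the same lock is also available as a context manager
    with lockutils.lock('_check_child_processes'):
        pass

    check_child_processes()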
Nov 24 22:39:57 compute-0 sshd-session[257671]: Invalid user manager from 185.217.1.246 port 25011
Nov 24 22:39:57 compute-0 sshd-session[257671]: Disconnecting invalid user manager 185.217.1.246 port 25011: Change of username or service not allowed: (manager,ssh-connection) -> (service,ssh-connection) [preauth]
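The sshd-session lines show a single source address cycling through usernames (abc, manager, service, timothy, ftp) in one scan; sshd refuses the mid-connection username change and drops each attempt preauth. A minimal sketch for counting such attempts per source address from lines in this format; the regex is written against the messages shown here, and the log path is an assumption:

    import re
    from collections import Counter

    PATTERN = re.compile(r'sshd-session\[\d+\]: Invalid user (\S+) from (\S+) port (\d+)')

    attempts = Counter()
    with open('/var/log/messages') as fh:  # assumed path; `journalctl -t sshd-session` also works
        for line in fh:
            m = PATTERN.search(line)
            if m:
                user, addr, port = m.groups()
                attempts[addr] += 1

    for addr, count in attempts.most_common(10):
        print(addr, count)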
Nov 24 22:39:57 compute-0 nova_compute[189608]: 2025-11-24 22:39:57.802 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:39:58 compute-0 podman[257674]: 2025-11-24 22:39:58.550374725 +0000 UTC m=+0.092071712 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844)
Nov 24 22:39:58 compute-0 podman[257673]: 2025-11-24 22:39:58.562126412 +0000 UTC m=+0.105444510 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 24 22:39:59 compute-0 nova_compute[189608]: 2025-11-24 22:39:59.417 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:39:59 compute-0 podman[203795]: time="2025-11-24T22:39:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:39:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:39:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Nov 24 22:39:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:39:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4806 "" "Go-http-client/1.1"
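The two GET requests are the prometheus-podman-exporter polling the libpod REST API over the podman service socket (the CONTAINER_HOST value in the podman_exporter config above): one call lists all containers, the other fetches their stats. A minimal sketch of the equivalent container listing with the podman-py client, assuming it is installed and using the socket path from that config; the stats endpoint is noted in a comment but not called here:

    from podman import PodmanClient

    # Same socket the exporter uses: unix:///run/podman/podman.sock
    with PodmanClient(base_url='unix:///run/podman/podman.sock') as client:
        # GET /libpod/containers/json?all=true
        for ctr in client.containers.list(all=True):
            print(ctr.id[:12], ctr.name, ctr.status)
        # the second request in the log corresponds to
        # GET /libpod/containers/stats?stream=false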
Nov 24 22:40:01 compute-0 openstack_network_exporter[205945]: ERROR   22:40:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:40:01 compute-0 openstack_network_exporter[205945]: ERROR   22:40:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:40:01 compute-0 openstack_network_exporter[205945]: ERROR   22:40:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:40:01 compute-0 openstack_network_exporter[205945]: ERROR   22:40:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:40:01 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:40:01 compute-0 openstack_network_exporter[205945]: ERROR   22:40:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:40:01 compute-0 openstack_network_exporter[205945]: 
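These errors come from the exporter probing daemons through their appctl control sockets: ovn-northd runs on the control plane rather than on compute-0, so its socket files do not exist here, the ovsdb-server socket it expects is likewise not found at the path it looks in, and with no userspace (netdev) datapath the PMD queries answer "please specify an existing datapath". A minimal sketch of the same probing with ovs-appctl via subprocess, assuming the conventional socket locations under /run/openvswitch and /run/ovn:

    import glob
    import subprocess

    def appctl(sock_glob, *command):
        socks = glob.glob(sock_glob)
        if not socks:
            raise RuntimeError('no control socket files found for ' + sock_glob)
        res = subprocess.run(['ovs-appctl', '-t', socks[0], *command],
                             capture_output=True, text=True)
        return res.returncode, res.stdout, res.stderr

    # ovs-vswitchd runs locally, but without a netdev datapath this answers
    # "please specify an existing datapath", as in the log
    print(appctl('/run/openvswitch/ovs-vswitchd.*.ctl', 'dpif-netdev/pmd-rxq-show'))

    # ovn-northd only runs on the controllers, so nothing matches here
    print(glob.glob('/run/ovn/ovn-northd.*.ctl'))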
Nov 24 22:40:02 compute-0 nova_compute[189608]: 2025-11-24 22:40:02.805 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:40:04 compute-0 sshd-session[257711]: Invalid user service from 185.217.1.246 port 44602
Nov 24 22:40:04 compute-0 nova_compute[189608]: 2025-11-24 22:40:04.419 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:40:05 compute-0 sshd-session[257711]: Disconnecting invalid user service 185.217.1.246 port 44602: Change of username or service not allowed: (service,ssh-connection) -> (timothy,ssh-connection) [preauth]
Nov 24 22:40:07 compute-0 nova_compute[189608]: 2025-11-24 22:40:07.809 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:40:09 compute-0 nova_compute[189608]: 2025-11-24 22:40:09.421 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:40:09 compute-0 podman[257716]: 2025-11-24 22:40:09.537907117 +0000 UTC m=+0.077863540 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:40:09 compute-0 podman[257714]: 2025-11-24 22:40:09.550436108 +0000 UTC m=+0.094580851 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.4, container_name=kepler, distribution-scope=public, maintainer=Red Hat, Inc., release=1214.1726694543, com.redhat.component=ubi9-container, managed_by=edpm_ansible, io.openshift.expose-services=, config_id=edpm, release-0.7.12=, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Nov 24 22:40:09 compute-0 podman[257715]: 2025-11-24 22:40:09.556468777 +0000 UTC m=+0.108751323 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, release=1755695350, config_id=edpm, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.buildah.version=1.33.7)
Nov 24 22:40:12 compute-0 sshd-session[257713]: Invalid user timothy from 185.217.1.246 port 18308
Nov 24 22:40:12 compute-0 podman[257772]: 2025-11-24 22:40:12.785717308 +0000 UTC m=+0.114816992 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 24 22:40:12 compute-0 podman[257770]: 2025-11-24 22:40:12.789485826 +0000 UTC m=+0.133696661 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 24 22:40:12 compute-0 nova_compute[189608]: 2025-11-24 22:40:12.811 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:40:12 compute-0 podman[257771]: 2025-11-24 22:40:12.829546316 +0000 UTC m=+0.163578564 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 24 22:40:13 compute-0 sshd-session[257713]: Disconnecting invalid user timothy 185.217.1.246 port 18308: Change of username or service not allowed: (timothy,ssh-connection) -> (ftp,ssh-connection) [preauth]
Nov 24 22:40:14 compute-0 nova_compute[189608]: 2025-11-24 22:40:14.425 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.636 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, the polling process may take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.636 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.637 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.637 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f4c57fbbfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.638 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.638 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.639 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.639 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.639 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.639 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.640 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.640 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.640 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.640 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.641 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.641 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.641 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.642 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.642 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.642 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.642 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.642 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.643 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.643 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.643 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.644 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.644 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.644 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.644 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c56d7bef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
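Each "Registering pollster" line above refers to a stevedore Extension loaded from ceilometer's entry points; the same names reappear later as "Polling pollster <name>" lines. A minimal sketch of loading that set, assuming the ceilometer.poll.compute entry-point namespace (ceilometer must be installed for it to resolve):

    from stevedore import extension

    mgr = extension.ExtensionManager(namespace='ceilometer.poll.compute',
                                     invoke_on_load=False)
    for ext in mgr:
        # ext.name is the pollster name, e.g. cpu, disk.device.read.bytes,
        # network.outgoing.packets.drop
        print(ext.name, ext.entry_point)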
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.649 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a3bee9ba-6618-44bd-a443-da9fff6862a9', 'name': 'te-0491564-asg-haouulsup5dm-5bjba57zh6x7-pvowny27behx', 'flavor': {'id': 'a49f1e6c-1051-4dea-812e-0063121444a0', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ea88776c-3c0b-4e74-99b4-08aadc81390f'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4a6957a775da42c9b535753d6b0279d6', 'user_id': 'fcf527fb124b42b9ab6a20cc0938b39f', 'hostId': '81545f88e9d372ef5a81fb4c58a4232a2d7e6782c7ccb5b924a3030b', 'status': 'active', 'metadata': {'metering.server_group': 'c6477657-e9b0-476c-83b3-9dc474e946c6'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.655 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '715e08a7-7174-4e14-a83d-67aab18333d8', 'name': 'te-0491564-asg-haouulsup5dm-khtqnyy46ddc-wz3ip25x6yfo', 'flavor': {'id': 'a49f1e6c-1051-4dea-812e-0063121444a0', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ea88776c-3c0b-4e74-99b4-08aadc81390f'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4a6957a775da42c9b535753d6b0279d6', 'user_id': 'fcf527fb124b42b9ab6a20cc0938b39f', 'hostId': '81545f88e9d372ef5a81fb4c58a4232a2d7e6782c7ccb5b924a3030b', 'status': 'active', 'metadata': {'metering.server_group': 'c6477657-e9b0-476c-83b3-9dc474e946c6'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
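The instance data dictionaries are produced by libvirt-based discovery: the agent lists local domains and reads the nova metadata block embedded in each domain's XML. A minimal sketch with libvirt-python, assuming the nova metadata namespace shown in the kepler config earlier (http://openstack.org/xmlns/libvirt/nova/1.1); the fields pulled out here are illustrative, not ceilometer's exact parser:

    import xml.etree.ElementTree as ET
    import libvirt

    NOVA_NS = 'http://openstack.org/xmlns/libvirt/nova/1.1'

    conn = libvirt.openReadOnly('qemu:///system')
    for dom in conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE):
        root = ET.fromstring(dom.XMLDesc())
        display_name = root.findtext('.//{%s}name' % NOVA_NS)
        flavor = root.find('.//{%s}flavor' % NOVA_NS)
        print(dom.UUIDString(), dom.name(), display_name,
              flavor.get('name') if flavor is not None else None)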
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.655 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.655 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.656 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.656 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.657 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-24T22:40:17.656080) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.661 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.666 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.667 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.667 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f4c580340b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.667 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.667 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.667 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.667 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.667 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.668 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.668 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-24T22:40:17.667686) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.669 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.669 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f4c57fba4e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.669 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.669 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.669 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.669 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.670 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-24T22:40:17.669724) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.688 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.688 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.706 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.707 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.707 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
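The two capacity samples per instance (1073741824 bytes for the 1 GiB root disk and 509952 bytes for a second, much smaller device, likely the config drive) are per-device virtual sizes. A minimal sketch of reading the same numbers directly from libvirt, assuming libvirt-python; ceilometer itself goes through its inspector layer rather than calling libvirt like this:

    import xml.etree.ElementTree as ET
    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByName('instance-0000000b')  # OS-EXT-SRV-ATTR:instance_name above
    targets = [t.get('dev') for t in
               ET.fromstring(dom.XMLDesc()).findall('.//devices/disk/target')]
    for dev in targets:
        capacity, allocation, physical = dom.blockInfo(dev)
        print(dev, 'capacity:', capacity, 'bytes')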
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f4c57fbb080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.708 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.708 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.708 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.708 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.710 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-24T22:40:17.708721) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.766 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.read.bytes volume: 30194176 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.767 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.814 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.read.bytes volume: 31070720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.815 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 nova_compute[189608]: 2025-11-24 22:40:17.815 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.816 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.817 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f4c57fbb1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.817 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.817 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.817 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.818 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.818 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.read.latency volume: 1080868082 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.818 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.read.latency volume: 103540431 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.819 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.read.latency volume: 1147589020 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.820 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-24T22:40:17.818127) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.820 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.read.latency volume: 75679092 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.821 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.821 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f4c57fb9730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.822 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.822 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.822 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.822 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.824 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-24T22:40:17.822859) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.856 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/cpu volume: 336480000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.878 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/cpu volume: 334860000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.878 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
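The cpu samples are cumulative guest CPU time in nanoseconds (about 336 s for the first instance). A minimal sketch of the underlying reading, again taken straight from libvirt rather than through ceilometer's inspector layer:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByName('instance-0000000b')
    state, max_mem_kib, mem_kib, vcpus, cpu_time_ns = dom.info()
    print('vcpus:', vcpus, 'cpu time (ns):', cpu_time_ns)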
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.878 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f4c57fbb200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.879 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.879 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.879 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.879 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.879 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.read.requests volume: 1088 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.879 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.880 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.read.requests volume: 1136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.880 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.880 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.880 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f4c57fbb260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.880 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.880 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.880 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.881 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.881 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.usage volume: 30081024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.881 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.881 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.881 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.882 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.882 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f4c57fbb2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.882 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.882 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.882 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.882 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.882 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.write.bytes volume: 73207808 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.882 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.883 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.write.bytes volume: 73183232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.883 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.883 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.883 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f4c57fbb320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.883 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.884 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.884 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.884 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.884 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.write.latency volume: 4167302189 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.884 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.884 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.write.latency volume: 3127398948 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.885 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.885 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.885 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f4c57fbb380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.885 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.885 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.885 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.885 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.885 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.write.requests volume: 323 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.886 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.886 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.write.requests volume: 344 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.886 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.886 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.886 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f4c58034380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.887 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.887 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.887 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.887 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.887 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.887 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.887 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.888 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f4c57fbb740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.888 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.888 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.888 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.888 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.888 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.incoming.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.888 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.888 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.889 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f4c57fbb3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.889 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.889 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.889 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.889 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.889 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.889 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f4c57fbb9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.890 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.890 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f4c57fbb440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.890 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.890 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.890 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.890 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.890 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.891 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f4c57fbbc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.891 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.891 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.891 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.891 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.891 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.891 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.892 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.892 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f4c57fbbcb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.892 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.892 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.892 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.892 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.892 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.893 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.893 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.893 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f4c57fbbd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.893 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.893 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.893 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.893 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.894 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.894 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.894 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.894 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f4c57fbbda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.894 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.894 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.894 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.894 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.895 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.895 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.895 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.895 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f4c57fbbe30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.895 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.895 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.895 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.896 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.896 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.896 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.outgoing.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.896 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.896 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f4c57fbb680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.896 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.896 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.897 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.897 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.897 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/memory.usage volume: 42.21484375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.897 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/memory.usage volume: 42.42578125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.897 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.897 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f4c57fbbec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.897 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.897 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f4c57fbb6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.898 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.898 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.898 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.898 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.898 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.incoming.bytes volume: 2150 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.898 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.898 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.898 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f4c59b99130>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.899 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.899 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.899 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.899 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.899 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.allocation volume: 30744576 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.899 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.900 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.allocation volume: 30482432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.900 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.900 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.900 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f4c57fbbf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.900 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.900 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.900 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.900 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.901 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.901 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.901 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.889 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-24T22:40:17.879271) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.902 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.902 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.902 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.902 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.902 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.902 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.902 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.902 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.902 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.902 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.902 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.902 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.902 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.902 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.902 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.903 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.903 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.903 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.903 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.903 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.903 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.903 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.903 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.903 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.903 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.903 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-24T22:40:17.881033) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.903 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.903 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-24T22:40:17.882669) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.903 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-24T22:40:17.884264) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.904 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-24T22:40:17.885818) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.904 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-24T22:40:17.887268) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.904 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-24T22:40:17.888400) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.904 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-24T22:40:17.889466) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.905 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-24T22:40:17.890613) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.905 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-24T22:40:17.891523) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.905 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-24T22:40:17.892813) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.905 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-24T22:40:17.893897) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.906 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-24T22:40:17.894946) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.906 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-24T22:40:17.896009) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.906 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-24T22:40:17.897085) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.906 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-24T22:40:17.898232) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.906 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-24T22:40:17.899432) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:40:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:40:17.907 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-24T22:40:17.900914) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:40:19 compute-0 nova_compute[189608]: 2025-11-24 22:40:19.428 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:40:20 compute-0 sshd-session[257842]: Invalid user ubuntu from 193.32.162.145 port 48642
Nov 24 22:40:20 compute-0 sshd-session[257842]: Connection closed by invalid user ubuntu 193.32.162.145 port 48642 [preauth]
Nov 24 22:40:22 compute-0 nova_compute[189608]: 2025-11-24 22:40:22.818 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:40:23 compute-0 sshd-session[257838]: Disconnecting authenticating user ftp 185.217.1.246 port 45577: Change of username or service not allowed: (ftp,ssh-connection) -> (default,ssh-connection) [preauth]
Nov 24 22:40:24 compute-0 nova_compute[189608]: 2025-11-24 22:40:24.431 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:40:24 compute-0 podman[257844]: 2025-11-24 22:40:24.534331652 +0000 UTC m=+0.098659089 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 24 22:40:27 compute-0 nova_compute[189608]: 2025-11-24 22:40:27.824 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:40:27 compute-0 sshd-session[257868]: Invalid user sol from 45.148.10.240 port 60718
Nov 24 22:40:27 compute-0 sshd-session[257868]: Connection closed by invalid user sol 45.148.10.240 port 60718 [preauth]
Nov 24 22:40:29 compute-0 nova_compute[189608]: 2025-11-24 22:40:29.433 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:40:29 compute-0 podman[257871]: 2025-11-24 22:40:29.58813269 +0000 UTC m=+0.135569530 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 24 22:40:29 compute-0 podman[257872]: 2025-11-24 22:40:29.615616207 +0000 UTC m=+0.153626092 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:40:29 compute-0 podman[203795]: time="2025-11-24T22:40:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:40:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:40:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Nov 24 22:40:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:40:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4801 "" "Go-http-client/1.1"
Nov 24 22:40:30 compute-0 sshd-session[257867]: Invalid user default from 185.217.1.246 port 19930
Nov 24 22:40:31 compute-0 openstack_network_exporter[205945]: ERROR   22:40:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:40:31 compute-0 openstack_network_exporter[205945]: ERROR   22:40:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:40:31 compute-0 openstack_network_exporter[205945]: ERROR   22:40:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:40:31 compute-0 openstack_network_exporter[205945]: ERROR   22:40:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:40:31 compute-0 openstack_network_exporter[205945]: ERROR   22:40:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:40:32 compute-0 nova_compute[189608]: 2025-11-24 22:40:32.829 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:40:34 compute-0 nova_compute[189608]: 2025-11-24 22:40:34.436 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:40:34 compute-0 sshd-session[257867]: Disconnecting invalid user default 185.217.1.246 port 19930: Change of username or service not allowed: (default,ssh-connection) -> (guest,ssh-connection) [preauth]
Nov 24 22:40:37 compute-0 nova_compute[189608]: 2025-11-24 22:40:37.792 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:40:37 compute-0 nova_compute[189608]: 2025-11-24 22:40:37.828 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:40:37 compute-0 nova_compute[189608]: 2025-11-24 22:40:37.829 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:40:37 compute-0 nova_compute[189608]: 2025-11-24 22:40:37.829 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:40:37 compute-0 nova_compute[189608]: 2025-11-24 22:40:37.830 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 22:40:37 compute-0 nova_compute[189608]: 2025-11-24 22:40:37.835 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:40:37 compute-0 nova_compute[189608]: 2025-11-24 22:40:37.948 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:40:38 compute-0 nova_compute[189608]: 2025-11-24 22:40:38.048 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:40:38 compute-0 nova_compute[189608]: 2025-11-24 22:40:38.056 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:40:38 compute-0 nova_compute[189608]: 2025-11-24 22:40:38.181 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:40:38 compute-0 nova_compute[189608]: 2025-11-24 22:40:38.192 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:40:38 compute-0 nova_compute[189608]: 2025-11-24 22:40:38.258 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:40:38 compute-0 nova_compute[189608]: 2025-11-24 22:40:38.260 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:40:38 compute-0 nova_compute[189608]: 2025-11-24 22:40:38.324 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:40:38 compute-0 nova_compute[189608]: 2025-11-24 22:40:38.743 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:40:38 compute-0 nova_compute[189608]: 2025-11-24 22:40:38.746 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4838MB free_disk=72.0693588256836GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 22:40:38 compute-0 nova_compute[189608]: 2025-11-24 22:40:38.746 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:40:38 compute-0 nova_compute[189608]: 2025-11-24 22:40:38.747 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:40:38 compute-0 nova_compute[189608]: 2025-11-24 22:40:38.899 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance a3bee9ba-6618-44bd-a443-da9fff6862a9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:40:38 compute-0 nova_compute[189608]: 2025-11-24 22:40:38.900 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance 715e08a7-7174-4e14-a83d-67aab18333d8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:40:38 compute-0 nova_compute[189608]: 2025-11-24 22:40:38.902 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 22:40:38 compute-0 nova_compute[189608]: 2025-11-24 22:40:38.903 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 22:40:38 compute-0 nova_compute[189608]: 2025-11-24 22:40:38.977 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Refreshing inventories for resource provider 7680d048-14f1-46f8-a34d-a7eb32eb11df _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 24 22:40:39 compute-0 nova_compute[189608]: 2025-11-24 22:40:39.035 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Updating ProviderTree inventory for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 24 22:40:39 compute-0 nova_compute[189608]: 2025-11-24 22:40:39.035 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Updating inventory in ProviderTree for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 24 22:40:39 compute-0 nova_compute[189608]: 2025-11-24 22:40:39.052 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Refreshing aggregate associations for resource provider 7680d048-14f1-46f8-a34d-a7eb32eb11df, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 24 22:40:39 compute-0 nova_compute[189608]: 2025-11-24 22:40:39.071 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Refreshing trait associations for resource provider 7680d048-14f1-46f8-a34d-a7eb32eb11df, traits: HW_CPU_X86_AVX2,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NODE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE4A,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_ACCELERATORS,HW_CPU_X86_SHA,HW_CPU_X86_AVX,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_1_2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 24 22:40:39 compute-0 nova_compute[189608]: 2025-11-24 22:40:39.145 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:40:39 compute-0 nova_compute[189608]: 2025-11-24 22:40:39.158 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:40:39 compute-0 nova_compute[189608]: 2025-11-24 22:40:39.160 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 22:40:39 compute-0 nova_compute[189608]: 2025-11-24 22:40:39.160 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.413s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:40:39 compute-0 nova_compute[189608]: 2025-11-24 22:40:39.441 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:40:40 compute-0 nova_compute[189608]: 2025-11-24 22:40:40.161 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:40:40 compute-0 nova_compute[189608]: 2025-11-24 22:40:40.161 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 22:40:40 compute-0 nova_compute[189608]: 2025-11-24 22:40:40.162 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 22:40:40 compute-0 podman[257921]: 2025-11-24 22:40:40.566104247 +0000 UTC m=+0.103049345 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, architecture=x86_64, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, release=1755695350, maintainer=Red Hat, Inc., config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container)
Nov 24 22:40:40 compute-0 podman[257920]: 2025-11-24 22:40:40.578761502 +0000 UTC m=+0.120205370 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, container_name=kepler, distribution-scope=public, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.openshift.expose-services=, maintainer=Red Hat, Inc., release-0.7.12=, vcs-type=git)
Nov 24 22:40:40 compute-0 podman[257922]: 2025-11-24 22:40:40.599327833 +0000 UTC m=+0.123778391 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 24 22:40:41 compute-0 nova_compute[189608]: 2025-11-24 22:40:41.288 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "refresh_cache-a3bee9ba-6618-44bd-a443-da9fff6862a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:40:41 compute-0 nova_compute[189608]: 2025-11-24 22:40:41.288 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquired lock "refresh_cache-a3bee9ba-6618-44bd-a443-da9fff6862a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:40:41 compute-0 nova_compute[189608]: 2025-11-24 22:40:41.288 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 24 22:40:41 compute-0 nova_compute[189608]: 2025-11-24 22:40:41.289 189613 DEBUG nova.objects.instance [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lazy-loading 'info_cache' on Instance uuid a3bee9ba-6618-44bd-a443-da9fff6862a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:40:42 compute-0 sshd-session[257907]: Invalid user guest from 185.217.1.246 port 59931
Nov 24 22:40:42 compute-0 nova_compute[189608]: 2025-11-24 22:40:42.840 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:40:43 compute-0 podman[257978]: 2025-11-24 22:40:43.576004028 +0000 UTC m=+0.120144078 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 24 22:40:43 compute-0 podman[257979]: 2025-11-24 22:40:43.61386645 +0000 UTC m=+0.160669523 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 24 22:40:43 compute-0 podman[257980]: 2025-11-24 22:40:43.614697605 +0000 UTC m=+0.145060606 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 22:40:44 compute-0 nova_compute[189608]: 2025-11-24 22:40:44.025 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Updating instance_info_cache with network_info: [{"id": "5efccbc3-b2bb-4d9d-ba64-9382a4b2487b", "address": "fa:16:3e:40:6c:bb", "network": {"id": "a164481b-21c8-4cae-a6e9-b470d8a55a1f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.88", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6957a775da42c9b535753d6b0279d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5efccbc3-b2", "ovs_interfaceid": "5efccbc3-b2bb-4d9d-ba64-9382a4b2487b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:40:44 compute-0 nova_compute[189608]: 2025-11-24 22:40:44.046 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Releasing lock "refresh_cache-a3bee9ba-6618-44bd-a443-da9fff6862a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:40:44 compute-0 nova_compute[189608]: 2025-11-24 22:40:44.047 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 24 22:40:44 compute-0 nova_compute[189608]: 2025-11-24 22:40:44.444 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:40:46 compute-0 sshd-session[257907]: Disconnecting invalid user guest 185.217.1.246 port 59931: Change of username or service not allowed: (guest,ssh-connection) -> (user01,ssh-connection) [preauth]
Nov 24 22:40:46 compute-0 nova_compute[189608]: 2025-11-24 22:40:46.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:40:47 compute-0 nova_compute[189608]: 2025-11-24 22:40:47.789 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:40:47 compute-0 nova_compute[189608]: 2025-11-24 22:40:47.843 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:40:48 compute-0 nova_compute[189608]: 2025-11-24 22:40:48.792 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:40:49 compute-0 nova_compute[189608]: 2025-11-24 22:40:49.446 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:40:50 compute-0 sshd-session[258042]: Invalid user user01 from 185.217.1.246 port 36802
Nov 24 22:40:51 compute-0 nova_compute[189608]: 2025-11-24 22:40:51.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:40:51 compute-0 nova_compute[189608]: 2025-11-24 22:40:51.794 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 22:40:52 compute-0 sshd-session[258042]: Disconnecting invalid user user01 185.217.1.246 port 36802: Change of username or service not allowed: (user01,ssh-connection) -> (mc1,ssh-connection) [preauth]
Nov 24 22:40:52 compute-0 nova_compute[189608]: 2025-11-24 22:40:52.795 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:40:52 compute-0 nova_compute[189608]: 2025-11-24 22:40:52.848 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:40:53 compute-0 nova_compute[189608]: 2025-11-24 22:40:53.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:40:54 compute-0 nova_compute[189608]: 2025-11-24 22:40:54.449 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:40:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:40:54.606 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:40:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:40:54.607 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:40:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:40:54.607 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:40:54 compute-0 nova_compute[189608]: 2025-11-24 22:40:54.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:40:55 compute-0 podman[258047]: 2025-11-24 22:40:55.540133482 +0000 UTC m=+0.092906719 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 24 22:40:56 compute-0 sshd-session[258045]: Invalid user mc1 from 185.217.1.246 port 61166
Nov 24 22:40:57 compute-0 sshd-session[258045]: Disconnecting invalid user mc1 185.217.1.246 port 61166: Change of username or service not allowed: (mc1,ssh-connection) -> (cq,ssh-connection) [preauth]
Nov 24 22:40:57 compute-0 nova_compute[189608]: 2025-11-24 22:40:57.853 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:40:58 compute-0 nova_compute[189608]: 2025-11-24 22:40:58.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:40:58 compute-0 nova_compute[189608]: 2025-11-24 22:40:58.794 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 24 22:40:58 compute-0 nova_compute[189608]: 2025-11-24 22:40:58.812 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 24 22:40:59 compute-0 nova_compute[189608]: 2025-11-24 22:40:59.452 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:40:59 compute-0 podman[203795]: time="2025-11-24T22:40:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:40:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:40:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Nov 24 22:40:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:40:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4798 "" "Go-http-client/1.1"
Nov 24 22:40:59 compute-0 nova_compute[189608]: 2025-11-24 22:40:59.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:40:59 compute-0 nova_compute[189608]: 2025-11-24 22:40:59.794 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 24 22:41:00 compute-0 podman[258073]: 2025-11-24 22:41:00.514048579 +0000 UTC m=+0.067085913 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:41:00 compute-0 podman[258074]: 2025-11-24 22:41:00.521248533 +0000 UTC m=+0.071445999 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 24 22:41:01 compute-0 openstack_network_exporter[205945]: ERROR   22:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:41:01 compute-0 openstack_network_exporter[205945]: ERROR   22:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:41:01 compute-0 openstack_network_exporter[205945]: ERROR   22:41:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:41:01 compute-0 openstack_network_exporter[205945]: ERROR   22:41:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:41:01 compute-0 openstack_network_exporter[205945]: ERROR   22:41:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:41:01 compute-0 anacron[237240]: Job `cron.weekly' started
Nov 24 22:41:01 compute-0 anacron[237240]: Job `cron.weekly' terminated
Nov 24 22:41:02 compute-0 sshd-session[258071]: Invalid user cq from 185.217.1.246 port 25691
Nov 24 22:41:02 compute-0 nova_compute[189608]: 2025-11-24 22:41:02.855 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:41:03 compute-0 sshd-session[258071]: Disconnecting invalid user cq 185.217.1.246 port 25691: Change of username or service not allowed: (cq,ssh-connection) -> (tmax,ssh-connection) [preauth]
Nov 24 22:41:04 compute-0 nova_compute[189608]: 2025-11-24 22:41:04.459 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:41:07 compute-0 nova_compute[189608]: 2025-11-24 22:41:07.859 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:41:09 compute-0 nova_compute[189608]: 2025-11-24 22:41:09.464 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:41:11 compute-0 sshd-session[258116]: Invalid user tmax from 185.217.1.246 port 50653
Nov 24 22:41:11 compute-0 podman[258120]: 2025-11-24 22:41:11.519325614 +0000 UTC m=+0.071658416 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 22:41:11 compute-0 podman[258119]: 2025-11-24 22:41:11.521867464 +0000 UTC m=+0.076876170 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_id=edpm, build-date=2025-08-20T13:12:41, distribution-scope=public, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, vcs-type=git, version=9.6, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, managed_by=edpm_ansible, maintainer=Red Hat, Inc., release=1755695350, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 24 22:41:11 compute-0 podman[258118]: 2025-11-24 22:41:11.53426405 +0000 UTC m=+0.095961334 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., version=9.4, container_name=kepler, io.openshift.expose-services=, release-0.7.12=, config_id=edpm, io.buildah.version=1.29.0, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, managed_by=edpm_ansible, vcs-type=git, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container)
Nov 24 22:41:12 compute-0 sshd-session[258116]: Disconnecting invalid user tmax 185.217.1.246 port 50653: Change of username or service not allowed: (tmax,ssh-connection) -> (wss,ssh-connection) [preauth]
Nov 24 22:41:12 compute-0 nova_compute[189608]: 2025-11-24 22:41:12.864 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:41:13 compute-0 nova_compute[189608]: 2025-11-24 22:41:13.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:41:14 compute-0 nova_compute[189608]: 2025-11-24 22:41:14.468 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:41:14 compute-0 podman[258178]: 2025-11-24 22:41:14.53613201 +0000 UTC m=+0.099167774 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 24 22:41:14 compute-0 podman[258180]: 2025-11-24 22:41:14.553640176 +0000 UTC m=+0.106138581 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 24 22:41:14 compute-0 podman[258179]: 2025-11-24 22:41:14.591937161 +0000 UTC m=+0.149764282 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:41:17 compute-0 nova_compute[189608]: 2025-11-24 22:41:17.867 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:41:18 compute-0 sshd-session[258176]: Invalid user wss from 185.217.1.246 port 11788
Nov 24 22:41:18 compute-0 sshd-session[258176]: Disconnecting invalid user wss 185.217.1.246 port 11788: Change of username or service not allowed: (wss,ssh-connection) -> (sync,ssh-connection) [preauth]
Nov 24 22:41:19 compute-0 nova_compute[189608]: 2025-11-24 22:41:19.471 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:41:22 compute-0 nova_compute[189608]: 2025-11-24 22:41:22.872 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:41:24 compute-0 nova_compute[189608]: 2025-11-24 22:41:24.473 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:41:26 compute-0 podman[258246]: 2025-11-24 22:41:26.510009101 +0000 UTC m=+0.073818613 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 24 22:41:27 compute-0 sshd-session[258244]: Disconnecting authenticating user sync 185.217.1.246 port 46300: Change of username or service not allowed: (sync,ssh-connection) -> (sftp,ssh-connection) [preauth]
Nov 24 22:41:27 compute-0 nova_compute[189608]: 2025-11-24 22:41:27.877 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:41:29 compute-0 nova_compute[189608]: 2025-11-24 22:41:29.477 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:41:29 compute-0 podman[203795]: time="2025-11-24T22:41:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:41:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:41:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Nov 24 22:41:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:41:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4801 "" "Go-http-client/1.1"
Nov 24 22:41:31 compute-0 openstack_network_exporter[205945]: ERROR   22:41:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:41:31 compute-0 openstack_network_exporter[205945]: ERROR   22:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:41:31 compute-0 openstack_network_exporter[205945]: ERROR   22:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:41:31 compute-0 openstack_network_exporter[205945]: ERROR   22:41:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:41:31 compute-0 openstack_network_exporter[205945]: ERROR   22:41:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:41:31 compute-0 podman[258273]: 2025-11-24 22:41:31.55886412 +0000 UTC m=+0.114798081 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Nov 24 22:41:31 compute-0 podman[258272]: 2025-11-24 22:41:31.562982718 +0000 UTC m=+0.113229502 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 24 22:41:32 compute-0 sshd-session[258270]: Invalid user sftp from 185.217.1.246 port 10412
Nov 24 22:41:32 compute-0 nova_compute[189608]: 2025-11-24 22:41:32.882 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:41:33 compute-0 sshd-session[258270]: Disconnecting invalid user sftp 185.217.1.246 port 10412: Change of username or service not allowed: (sftp,ssh-connection) -> (github,ssh-connection) [preauth]
Nov 24 22:41:34 compute-0 nova_compute[189608]: 2025-11-24 22:41:34.483 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:41:37 compute-0 sshd-session[258309]: Invalid user github from 185.217.1.246 port 27720
Nov 24 22:41:37 compute-0 nova_compute[189608]: 2025-11-24 22:41:37.813 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:41:37 compute-0 nova_compute[189608]: 2025-11-24 22:41:37.847 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:41:37 compute-0 nova_compute[189608]: 2025-11-24 22:41:37.848 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:41:37 compute-0 nova_compute[189608]: 2025-11-24 22:41:37.849 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:41:37 compute-0 nova_compute[189608]: 2025-11-24 22:41:37.849 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 22:41:37 compute-0 nova_compute[189608]: 2025-11-24 22:41:37.886 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:41:37 compute-0 nova_compute[189608]: 2025-11-24 22:41:37.949 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:41:38 compute-0 nova_compute[189608]: 2025-11-24 22:41:38.048 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:41:38 compute-0 nova_compute[189608]: 2025-11-24 22:41:38.050 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:41:38 compute-0 nova_compute[189608]: 2025-11-24 22:41:38.129 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:41:38 compute-0 nova_compute[189608]: 2025-11-24 22:41:38.141 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:41:38 compute-0 nova_compute[189608]: 2025-11-24 22:41:38.234 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:41:38 compute-0 nova_compute[189608]: 2025-11-24 22:41:38.235 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:41:38 compute-0 sshd-session[258309]: Disconnecting invalid user github 185.217.1.246 port 27720: Change of username or service not allowed: (github,ssh-connection) -> (minima,ssh-connection) [preauth]
Nov 24 22:41:38 compute-0 nova_compute[189608]: 2025-11-24 22:41:38.299 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:41:38 compute-0 nova_compute[189608]: 2025-11-24 22:41:38.724 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:41:38 compute-0 nova_compute[189608]: 2025-11-24 22:41:38.725 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4834MB free_disk=72.06933212280273GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 22:41:38 compute-0 nova_compute[189608]: 2025-11-24 22:41:38.725 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:41:38 compute-0 nova_compute[189608]: 2025-11-24 22:41:38.726 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:41:38 compute-0 nova_compute[189608]: 2025-11-24 22:41:38.829 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance a3bee9ba-6618-44bd-a443-da9fff6862a9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:41:38 compute-0 nova_compute[189608]: 2025-11-24 22:41:38.829 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance 715e08a7-7174-4e14-a83d-67aab18333d8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:41:38 compute-0 nova_compute[189608]: 2025-11-24 22:41:38.829 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 22:41:38 compute-0 nova_compute[189608]: 2025-11-24 22:41:38.829 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 22:41:38 compute-0 nova_compute[189608]: 2025-11-24 22:41:38.905 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:41:38 compute-0 nova_compute[189608]: 2025-11-24 22:41:38.921 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:41:38 compute-0 nova_compute[189608]: 2025-11-24 22:41:38.923 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 22:41:38 compute-0 nova_compute[189608]: 2025-11-24 22:41:38.923 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.197s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:41:39 compute-0 nova_compute[189608]: 2025-11-24 22:41:39.486 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:41:41 compute-0 nova_compute[189608]: 2025-11-24 22:41:41.899 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:41:41 compute-0 nova_compute[189608]: 2025-11-24 22:41:41.922 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:41:41 compute-0 nova_compute[189608]: 2025-11-24 22:41:41.923 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 22:41:42 compute-0 nova_compute[189608]: 2025-11-24 22:41:42.315 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "refresh_cache-715e08a7-7174-4e14-a83d-67aab18333d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:41:42 compute-0 nova_compute[189608]: 2025-11-24 22:41:42.316 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquired lock "refresh_cache-715e08a7-7174-4e14-a83d-67aab18333d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:41:42 compute-0 nova_compute[189608]: 2025-11-24 22:41:42.316 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 24 22:41:42 compute-0 podman[258326]: 2025-11-24 22:41:42.570609315 +0000 UTC m=+0.107883965 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, architecture=x86_64, container_name=openstack_network_exporter, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container)
Nov 24 22:41:42 compute-0 podman[258325]: 2025-11-24 22:41:42.590115394 +0000 UTC m=+0.127610601 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2024-09-18T21:23:30, release-0.7.12=, vcs-type=git, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, io.openshift.expose-services=, vendor=Red Hat, Inc., name=ubi9)
Nov 24 22:41:42 compute-0 podman[258327]: 2025-11-24 22:41:42.611240163 +0000 UTC m=+0.131495042 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 22:41:42 compute-0 nova_compute[189608]: 2025-11-24 22:41:42.889 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:41:42 compute-0 sshd-session[258323]: Invalid user minima from 185.217.1.246 port 59373
Nov 24 22:41:43 compute-0 nova_compute[189608]: 2025-11-24 22:41:43.812 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Updating instance_info_cache with network_info: [{"id": "9d8978ca-0c88-4b94-bebb-cca47795447e", "address": "fa:16:3e:be:93:b1", "network": {"id": "a164481b-21c8-4cae-a6e9-b470d8a55a1f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6957a775da42c9b535753d6b0279d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d8978ca-0c", "ovs_interfaceid": "9d8978ca-0c88-4b94-bebb-cca47795447e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:41:43 compute-0 nova_compute[189608]: 2025-11-24 22:41:43.829 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Releasing lock "refresh_cache-715e08a7-7174-4e14-a83d-67aab18333d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:41:43 compute-0 nova_compute[189608]: 2025-11-24 22:41:43.830 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 24 22:41:44 compute-0 nova_compute[189608]: 2025-11-24 22:41:44.497 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:41:44 compute-0 podman[258385]: 2025-11-24 22:41:44.825987933 +0000 UTC m=+0.093370604 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 24 22:41:44 compute-0 podman[258387]: 2025-11-24 22:41:44.866680472 +0000 UTC m=+0.130657756 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 22:41:44 compute-0 podman[258386]: 2025-11-24 22:41:44.923274577 +0000 UTC m=+0.186042894 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 22:41:45 compute-0 sshd-session[258323]: Disconnecting invalid user minima 185.217.1.246 port 59373: Change of username or service not allowed: (minima,ssh-connection) -> (nsroot,ssh-connection) [preauth]
Nov 24 22:41:46 compute-0 nova_compute[189608]: 2025-11-24 22:41:46.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:41:47 compute-0 nova_compute[189608]: 2025-11-24 22:41:47.789 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:41:47 compute-0 nova_compute[189608]: 2025-11-24 22:41:47.891 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:41:49 compute-0 nova_compute[189608]: 2025-11-24 22:41:49.498 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:41:49 compute-0 nova_compute[189608]: 2025-11-24 22:41:49.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:41:51 compute-0 sshd-session[258453]: Invalid user nsroot from 185.217.1.246 port 19278
Nov 24 22:41:52 compute-0 sshd-session[258453]: Disconnecting invalid user nsroot 185.217.1.246 port 19278: Change of username or service not allowed: (nsroot,ssh-connection) -> (ubuntu,ssh-connection) [preauth]
Nov 24 22:41:52 compute-0 nova_compute[189608]: 2025-11-24 22:41:52.893 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:41:53 compute-0 nova_compute[189608]: 2025-11-24 22:41:53.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:41:53 compute-0 nova_compute[189608]: 2025-11-24 22:41:53.795 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:41:53 compute-0 nova_compute[189608]: 2025-11-24 22:41:53.796 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:41:53 compute-0 nova_compute[189608]: 2025-11-24 22:41:53.797 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 22:41:54 compute-0 nova_compute[189608]: 2025-11-24 22:41:54.501 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:41:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:41:54.607 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:41:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:41:54.608 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:41:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:41:54.609 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:41:54 compute-0 nova_compute[189608]: 2025-11-24 22:41:54.797 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:41:57 compute-0 podman[258458]: 2025-11-24 22:41:57.538262526 +0000 UTC m=+0.088819401 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 24 22:41:57 compute-0 nova_compute[189608]: 2025-11-24 22:41:57.899 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:41:58 compute-0 sshd-session[258456]: Invalid user ubuntu from 185.217.1.246 port 45852
Nov 24 22:41:59 compute-0 nova_compute[189608]: 2025-11-24 22:41:59.502 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:41:59 compute-0 podman[203795]: time="2025-11-24T22:41:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:41:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:41:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Nov 24 22:41:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:41:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4806 "" "Go-http-client/1.1"
Nov 24 22:42:01 compute-0 openstack_network_exporter[205945]: ERROR   22:42:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:42:01 compute-0 openstack_network_exporter[205945]: ERROR   22:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:42:01 compute-0 openstack_network_exporter[205945]: ERROR   22:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:42:01 compute-0 openstack_network_exporter[205945]: ERROR   22:42:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:42:01 compute-0 openstack_network_exporter[205945]: ERROR   22:42:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:42:02 compute-0 podman[258484]: 2025-11-24 22:42:02.593217446 +0000 UTC m=+0.138203661 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 24 22:42:02 compute-0 podman[258483]: 2025-11-24 22:42:02.599047578 +0000 UTC m=+0.142240027 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 24 22:42:02 compute-0 nova_compute[189608]: 2025-11-24 22:42:02.903 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:42:03 compute-0 sshd-session[258456]: Disconnecting invalid user ubuntu 185.217.1.246 port 45852: Change of username or service not allowed: (ubuntu,ssh-connection) -> (support1,ssh-connection) [preauth]
Nov 24 22:42:04 compute-0 nova_compute[189608]: 2025-11-24 22:42:04.504 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:42:07 compute-0 nova_compute[189608]: 2025-11-24 22:42:07.906 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:42:09 compute-0 nova_compute[189608]: 2025-11-24 22:42:09.509 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:42:10 compute-0 sshd-session[258521]: Invalid user support1 from 185.217.1.246 port 24074
Nov 24 22:42:11 compute-0 sshd-session[258521]: Disconnecting invalid user support1 185.217.1.246 port 24074: Change of username or service not allowed: (support1,ssh-connection) -> (smb,ssh-connection) [preauth]
Nov 24 22:42:12 compute-0 nova_compute[189608]: 2025-11-24 22:42:12.908 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:42:13 compute-0 podman[258525]: 2025-11-24 22:42:13.530685598 +0000 UTC m=+0.075429634 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., version=9.6, name=ubi9-minimal, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Nov 24 22:42:13 compute-0 podman[258524]: 2025-11-24 22:42:13.550697952 +0000 UTC m=+0.093196897 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-type=git, config_id=edpm, io.buildah.version=1.29.0, managed_by=edpm_ansible, release-0.7.12=, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, io.openshift.tags=base rhel9, architecture=x86_64, container_name=kepler, maintainer=Red Hat, Inc., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Nov 24 22:42:13 compute-0 podman[258526]: 2025-11-24 22:42:13.568166917 +0000 UTC m=+0.112160449 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 24 22:42:14 compute-0 nova_compute[189608]: 2025-11-24 22:42:14.512 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:42:15 compute-0 sshd-session[258523]: Invalid user smb from 185.217.1.246 port 55348
Nov 24 22:42:15 compute-0 podman[258585]: 2025-11-24 22:42:15.319888896 +0000 UTC m=+0.083963820 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 24 22:42:15 compute-0 podman[258587]: 2025-11-24 22:42:15.357573132 +0000 UTC m=+0.113899023 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 22:42:15 compute-0 podman[258586]: 2025-11-24 22:42:15.404577218 +0000 UTC m=+0.160805016 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 22:42:16 compute-0 sshd-session[258523]: Disconnecting invalid user smb 185.217.1.246 port 55348: Change of username or service not allowed: (smb,ssh-connection) -> (vtiger,ssh-connection) [preauth]
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.637 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them, so the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.638 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.638 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.639 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f4c57fbbfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.640 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.640 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.640 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.641 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.641 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.641 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.642 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.642 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.642 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.643 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.643 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.644 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.644 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.645 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.646 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.646 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.646 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.647 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a3bee9ba-6618-44bd-a443-da9fff6862a9', 'name': 'te-0491564-asg-haouulsup5dm-5bjba57zh6x7-pvowny27behx', 'flavor': {'id': 'a49f1e6c-1051-4dea-812e-0063121444a0', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ea88776c-3c0b-4e74-99b4-08aadc81390f'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4a6957a775da42c9b535753d6b0279d6', 'user_id': 'fcf527fb124b42b9ab6a20cc0938b39f', 'hostId': '81545f88e9d372ef5a81fb4c58a4232a2d7e6782c7ccb5b924a3030b', 'status': 'active', 'metadata': {'metering.server_group': 'c6477657-e9b0-476c-83b3-9dc474e946c6'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.647 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.648 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.648 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.648 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.649 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.649 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.650 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.650 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.652 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '715e08a7-7174-4e14-a83d-67aab18333d8', 'name': 'te-0491564-asg-haouulsup5dm-khtqnyy46ddc-wz3ip25x6yfo', 'flavor': {'id': 'a49f1e6c-1051-4dea-812e-0063121444a0', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ea88776c-3c0b-4e74-99b4-08aadc81390f'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4a6957a775da42c9b535753d6b0279d6', 'user_id': 'fcf527fb124b42b9ab6a20cc0938b39f', 'hostId': '81545f88e9d372ef5a81fb4c58a4232a2d7e6782c7ccb5b924a3030b', 'status': 'active', 'metadata': {'metering.server_group': 'c6477657-e9b0-476c-83b3-9dc474e946c6'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.652 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.652 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.652 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.653 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.654 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-24T22:42:17.653061) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.659 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.663 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.664 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.664 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f4c580340b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.664 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.664 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.664 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.665 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.665 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.665 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.665 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.665 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f4c57fba4e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.666 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.666 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.666 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.666 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.666 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-24T22:42:17.665069) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.667 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-24T22:42:17.666215) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.685 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.686 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.706 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.707 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.707 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.707 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f4c57fbb080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.707 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.707 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.707 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.708 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.708 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-24T22:42:17.708021) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.758 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.read.bytes volume: 30194176 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.759 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.817 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.read.bytes volume: 31070720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.817 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.818 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.818 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f4c57fbb1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.818 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.818 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.818 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.818 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.818 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.read.latency volume: 1080868082 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.819 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.read.latency volume: 103540431 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.819 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.read.latency volume: 1147589020 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.819 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.read.latency volume: 75679092 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.819 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.820 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f4c57fb9730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.820 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.820 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.820 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.820 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-24T22:42:17.818810) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.820 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-24T22:42:17.820386) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.820 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.847 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/cpu volume: 338190000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.880 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/cpu volume: 336570000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.881 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.881 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f4c57fbb200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.881 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.881 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.881 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.881 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.882 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-24T22:42:17.881538) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.881 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.read.requests volume: 1088 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.882 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.882 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.read.requests volume: 1136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.882 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.883 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.883 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f4c57fbb260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.883 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.883 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.883 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.883 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.883 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.usage volume: 30081024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.883 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.884 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.884 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.884 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.884 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f4c57fbb2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.884 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.885 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-24T22:42:17.883594) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.885 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.885 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.885 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.885 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.write.bytes volume: 73207808 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.885 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.885 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.write.bytes volume: 73183232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.886 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.886 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.887 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-24T22:42:17.885472) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.886 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f4c57fbb320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.887 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.887 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.887 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.887 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.887 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.write.latency volume: 4167302189 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.887 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.887 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.write.latency volume: 3127398948 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.888 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-24T22:42:17.887469) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.888 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.888 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.888 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f4c57fbb380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.889 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.889 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.889 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.889 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.889 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.write.requests volume: 323 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.889 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.889 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.write.requests volume: 344 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.889 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.890 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.890 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-24T22:42:17.889216) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.890 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f4c58034380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.890 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.890 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.890 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.891 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.891 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.891 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.891 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.891 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f4c57fbb740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.891 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.891 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.891 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.892 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.892 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-24T22:42:17.890989) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.892 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-24T22:42:17.892074) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.892 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.892 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.893 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.893 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f4c57fbb3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.893 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.893 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.893 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.893 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.893 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.893 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f4c57fbb9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.894 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.894 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f4c57fbb440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.894 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.894 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-24T22:42:17.893486) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.894 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.894 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.894 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.895 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.895 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f4c57fbbc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.895 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.895 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.895 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.895 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-24T22:42:17.894826) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.896 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-24T22:42:17.895727) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.895 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.896 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.896 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.896 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.896 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f4c57fbbcb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.896 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.897 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.897 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.897 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.897 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-24T22:42:17.897253) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.897 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.897 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.898 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.898 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f4c57fbbd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.898 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.898 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.898 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.898 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.898 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-24T22:42:17.898657) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.898 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.899 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.899 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.899 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f4c57fbbda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.899 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.899 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.899 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.900 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.900 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.900 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.900 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.900 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f4c57fbbe30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.900 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.900 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.900 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.901 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.901 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.901 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.901 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.901 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f4c57fbb680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.901 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.901 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.901 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.902 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.902 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/memory.usage volume: 42.2265625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.902 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-24T22:42:17.899984) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.902 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-24T22:42:17.900986) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.902 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-24T22:42:17.902040) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.902 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/memory.usage volume: 42.42578125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.903 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.903 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f4c57fbbec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.903 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.903 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f4c57fbb6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.903 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.903 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.903 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.903 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.903 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.incoming.bytes volume: 2150 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.903 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.904 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.904 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f4c59b99130>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.904 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.904 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-24T22:42:17.903603) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.904 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.904 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.904 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.905 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-24T22:42:17.904949) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.905 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.allocation volume: 30744576 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.905 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.905 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.allocation volume: 30482432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.905 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.906 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.906 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f4c57fbbf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.906 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.906 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.906 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.906 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.906 14 DEBUG ceilometer.compute.pollsters [-] a3bee9ba-6618-44bd-a443-da9fff6862a9/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.906 14 DEBUG ceilometer.compute.pollsters [-] 715e08a7-7174-4e14-a83d-67aab18333d8/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.907 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.907 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-24T22:42:17.906682) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.907 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.908 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.908 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.909 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.909 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.909 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.909 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.910 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.910 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.910 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.910 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.911 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.911 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.911 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.911 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.912 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.912 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.912 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.912 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.913 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.913 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.913 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.914 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.914 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.914 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:42:17 compute-0 nova_compute[189608]: 2025-11-24 22:42:17.913 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:42:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:42:17.914 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:42:19 compute-0 nova_compute[189608]: 2025-11-24 22:42:19.514 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:42:22 compute-0 nova_compute[189608]: 2025-11-24 22:42:22.916 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:42:23 compute-0 sshd-session[258655]: Invalid user sol from 45.148.10.240 port 43546
Nov 24 22:42:23 compute-0 sshd-session[258655]: Connection closed by invalid user sol 45.148.10.240 port 43546 [preauth]
Nov 24 22:42:23 compute-0 sshd-session[258653]: Invalid user vtiger from 185.217.1.246 port 14511
Nov 24 22:42:24 compute-0 sshd-session[258653]: Disconnecting invalid user vtiger 185.217.1.246 port 14511: Change of username or service not allowed: (vtiger,ssh-connection) -> (alan,ssh-connection) [preauth]
Nov 24 22:42:24 compute-0 nova_compute[189608]: 2025-11-24 22:42:24.517 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:42:27 compute-0 nova_compute[189608]: 2025-11-24 22:42:27.921 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:42:28 compute-0 podman[258659]: 2025-11-24 22:42:28.547205764 +0000 UTC m=+0.091571378 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 24 22:42:29 compute-0 sshd-session[258657]: Invalid user alan from 185.217.1.246 port 34754
Nov 24 22:42:29 compute-0 nova_compute[189608]: 2025-11-24 22:42:29.520 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:42:29 compute-0 sshd-session[258657]: Disconnecting invalid user alan 185.217.1.246 port 34754: Change of username or service not allowed: (alan,ssh-connection) -> (mysql,ssh-connection) [preauth]
Nov 24 22:42:29 compute-0 podman[203795]: time="2025-11-24T22:42:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:42:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:42:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Nov 24 22:42:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:42:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4805 "" "Go-http-client/1.1"
Nov 24 22:42:31 compute-0 openstack_network_exporter[205945]: ERROR   22:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:42:31 compute-0 openstack_network_exporter[205945]: ERROR   22:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:42:31 compute-0 openstack_network_exporter[205945]: ERROR   22:42:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:42:31 compute-0 openstack_network_exporter[205945]: ERROR   22:42:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:42:31 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:42:31 compute-0 openstack_network_exporter[205945]: ERROR   22:42:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:42:31 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:42:32 compute-0 nova_compute[189608]: 2025-11-24 22:42:32.925 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:42:33 compute-0 podman[258685]: 2025-11-24 22:42:33.556869964 +0000 UTC m=+0.099576557 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, config_id=multipathd)
Nov 24 22:42:33 compute-0 podman[258686]: 2025-11-24 22:42:33.557782382 +0000 UTC m=+0.090892876 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 24 22:42:34 compute-0 nova_compute[189608]: 2025-11-24 22:42:34.526 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:42:35 compute-0 sshd-session[258683]: Invalid user mysql from 185.217.1.246 port 60936
Nov 24 22:42:36 compute-0 sshd-session[258683]: Disconnecting invalid user mysql 185.217.1.246 port 60936: Change of username or service not allowed: (mysql,ssh-connection) -> (oracle,ssh-connection) [preauth]
Nov 24 22:42:37 compute-0 nova_compute[189608]: 2025-11-24 22:42:37.928 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:42:39 compute-0 nova_compute[189608]: 2025-11-24 22:42:39.529 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:42:39 compute-0 nova_compute[189608]: 2025-11-24 22:42:39.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:42:39 compute-0 nova_compute[189608]: 2025-11-24 22:42:39.829 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:42:39 compute-0 nova_compute[189608]: 2025-11-24 22:42:39.829 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:42:39 compute-0 nova_compute[189608]: 2025-11-24 22:42:39.830 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:42:39 compute-0 nova_compute[189608]: 2025-11-24 22:42:39.831 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 22:42:39 compute-0 nova_compute[189608]: 2025-11-24 22:42:39.929 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:42:40 compute-0 nova_compute[189608]: 2025-11-24 22:42:40.027 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:42:40 compute-0 nova_compute[189608]: 2025-11-24 22:42:40.030 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:42:40 compute-0 nova_compute[189608]: 2025-11-24 22:42:40.090 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:42:40 compute-0 nova_compute[189608]: 2025-11-24 22:42:40.099 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:42:40 compute-0 nova_compute[189608]: 2025-11-24 22:42:40.159 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:42:40 compute-0 nova_compute[189608]: 2025-11-24 22:42:40.160 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 22:42:40 compute-0 nova_compute[189608]: 2025-11-24 22:42:40.225 189613 DEBUG oslo_concurrency.processutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 22:42:40 compute-0 nova_compute[189608]: 2025-11-24 22:42:40.598 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:42:40 compute-0 nova_compute[189608]: 2025-11-24 22:42:40.599 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4833MB free_disk=72.06803131103516GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 22:42:40 compute-0 nova_compute[189608]: 2025-11-24 22:42:40.600 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:42:40 compute-0 nova_compute[189608]: 2025-11-24 22:42:40.600 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:42:40 compute-0 nova_compute[189608]: 2025-11-24 22:42:40.698 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance a3bee9ba-6618-44bd-a443-da9fff6862a9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:42:40 compute-0 nova_compute[189608]: 2025-11-24 22:42:40.699 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Instance 715e08a7-7174-4e14-a83d-67aab18333d8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 22:42:40 compute-0 nova_compute[189608]: 2025-11-24 22:42:40.699 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 22:42:40 compute-0 nova_compute[189608]: 2025-11-24 22:42:40.699 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 22:42:40 compute-0 nova_compute[189608]: 2025-11-24 22:42:40.753 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:42:40 compute-0 nova_compute[189608]: 2025-11-24 22:42:40.771 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:42:40 compute-0 nova_compute[189608]: 2025-11-24 22:42:40.774 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 22:42:40 compute-0 nova_compute[189608]: 2025-11-24 22:42:40.774 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.174s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:42:42 compute-0 sshd-session[258721]: Invalid user oracle from 185.217.1.246 port 28313
Nov 24 22:42:42 compute-0 nova_compute[189608]: 2025-11-24 22:42:42.777 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:42:42 compute-0 nova_compute[189608]: 2025-11-24 22:42:42.777 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 22:42:42 compute-0 nova_compute[189608]: 2025-11-24 22:42:42.778 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 22:42:42 compute-0 nova_compute[189608]: 2025-11-24 22:42:42.932 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:42:43 compute-0 nova_compute[189608]: 2025-11-24 22:42:43.376 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "refresh_cache-a3bee9ba-6618-44bd-a443-da9fff6862a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 22:42:43 compute-0 nova_compute[189608]: 2025-11-24 22:42:43.377 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquired lock "refresh_cache-a3bee9ba-6618-44bd-a443-da9fff6862a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 22:42:43 compute-0 nova_compute[189608]: 2025-11-24 22:42:43.377 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 24 22:42:43 compute-0 nova_compute[189608]: 2025-11-24 22:42:43.378 189613 DEBUG nova.objects.instance [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lazy-loading 'info_cache' on Instance uuid a3bee9ba-6618-44bd-a443-da9fff6862a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:42:44 compute-0 nova_compute[189608]: 2025-11-24 22:42:44.533 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:42:44 compute-0 podman[258735]: 2025-11-24 22:42:44.539000238 +0000 UTC m=+0.089471532 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, release=1214.1726694543, release-0.7.12=, container_name=kepler, io.buildah.version=1.29.0, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, architecture=x86_64, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=)
Nov 24 22:42:44 compute-0 podman[258736]: 2025-11-24 22:42:44.581344479 +0000 UTC m=+0.111689855 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, maintainer=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, name=ubi9-minimal, architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.buildah.version=1.33.7, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public)
Nov 24 22:42:44 compute-0 podman[258742]: 2025-11-24 22:42:44.592837627 +0000 UTC m=+0.115411711 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi)
Nov 24 22:42:44 compute-0 nova_compute[189608]: 2025-11-24 22:42:44.967 189613 DEBUG nova.network.neutron [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Updating instance_info_cache with network_info: [{"id": "5efccbc3-b2bb-4d9d-ba64-9382a4b2487b", "address": "fa:16:3e:40:6c:bb", "network": {"id": "a164481b-21c8-4cae-a6e9-b470d8a55a1f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.88", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6957a775da42c9b535753d6b0279d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5efccbc3-b2", "ovs_interfaceid": "5efccbc3-b2bb-4d9d-ba64-9382a4b2487b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:42:44 compute-0 nova_compute[189608]: 2025-11-24 22:42:44.980 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Releasing lock "refresh_cache-a3bee9ba-6618-44bd-a443-da9fff6862a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 22:42:44 compute-0 nova_compute[189608]: 2025-11-24 22:42:44.980 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 24 22:42:45 compute-0 podman[258790]: 2025-11-24 22:42:45.564679609 +0000 UTC m=+0.099977279 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 24 22:42:45 compute-0 podman[258788]: 2025-11-24 22:42:45.57495429 +0000 UTC m=+0.119476578 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 24 22:42:45 compute-0 podman[258789]: 2025-11-24 22:42:45.593506529 +0000 UTC m=+0.133022331 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 24 22:42:47 compute-0 sshd-session[258721]: error: maximum authentication attempts exceeded for invalid user oracle from 185.217.1.246 port 28313 ssh2 [preauth]
Nov 24 22:42:47 compute-0 sshd-session[258721]: Disconnecting invalid user oracle 185.217.1.246 port 28313: Too many authentication failures [preauth]
Nov 24 22:42:47 compute-0 nova_compute[189608]: 2025-11-24 22:42:47.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:42:47 compute-0 nova_compute[189608]: 2025-11-24 22:42:47.936 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:42:48 compute-0 nova_compute[189608]: 2025-11-24 22:42:48.789 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:42:49 compute-0 nova_compute[189608]: 2025-11-24 22:42:49.534 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:42:51 compute-0 sshd-session[258855]: Invalid user oracle from 185.217.1.246 port 4054
Nov 24 22:42:51 compute-0 nova_compute[189608]: 2025-11-24 22:42:51.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:42:52 compute-0 sshd-session[258855]: Disconnecting invalid user oracle 185.217.1.246 port 4054: Change of username or service not allowed: (oracle,ssh-connection) -> (wuhan,ssh-connection) [preauth]
Nov 24 22:42:52 compute-0 nova_compute[189608]: 2025-11-24 22:42:52.941 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:42:53 compute-0 nova_compute[189608]: 2025-11-24 22:42:53.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:42:54 compute-0 nova_compute[189608]: 2025-11-24 22:42:54.540 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:42:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:42:54.608 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:42:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:42:54.609 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:42:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:42:54.610 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:42:54 compute-0 nova_compute[189608]: 2025-11-24 22:42:54.792 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:42:54 compute-0 nova_compute[189608]: 2025-11-24 22:42:54.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:42:54 compute-0 nova_compute[189608]: 2025-11-24 22:42:54.793 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 22:42:55 compute-0 sshd-session[258858]: Invalid user wuhan from 185.217.1.246 port 14064
Nov 24 22:42:55 compute-0 sshd-session[258858]: Disconnecting invalid user wuhan 185.217.1.246 port 14064: Change of username or service not allowed: (wuhan,ssh-connection) -> (Admin,ssh-connection) [preauth]
Nov 24 22:42:56 compute-0 nova_compute[189608]: 2025-11-24 22:42:56.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:42:57 compute-0 nova_compute[189608]: 2025-11-24 22:42:57.947 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:42:59 compute-0 podman[258862]: 2025-11-24 22:42:59.523767548 +0000 UTC m=+0.079447929 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 24 22:42:59 compute-0 nova_compute[189608]: 2025-11-24 22:42:59.542 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:42:59 compute-0 podman[203795]: time="2025-11-24T22:42:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:42:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:42:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Nov 24 22:42:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:42:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4803 "" "Go-http-client/1.1"
Nov 24 22:42:59 compute-0 sshd-session[258860]: Invalid user Admin from 185.217.1.246 port 33220
Nov 24 22:43:01 compute-0 openstack_network_exporter[205945]: ERROR   22:43:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:43:01 compute-0 openstack_network_exporter[205945]: ERROR   22:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:43:01 compute-0 openstack_network_exporter[205945]: ERROR   22:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:43:01 compute-0 openstack_network_exporter[205945]: ERROR   22:43:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:43:01 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:43:01 compute-0 openstack_network_exporter[205945]: ERROR   22:43:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:43:01 compute-0 openstack_network_exporter[205945]: 
Nov 24 22:43:02 compute-0 nova_compute[189608]: 2025-11-24 22:43:02.951 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:43:04 compute-0 podman[258884]: 2025-11-24 22:43:04.518381715 +0000 UTC m=+0.075574228 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 24 22:43:04 compute-0 podman[258885]: 2025-11-24 22:43:04.524715473 +0000 UTC m=+0.078137879 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm)
Nov 24 22:43:04 compute-0 nova_compute[189608]: 2025-11-24 22:43:04.546 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:43:06 compute-0 sshd-session[258860]: Disconnecting invalid user Admin 185.217.1.246 port 33220: Change of username or service not allowed: (Admin,ssh-connection) -> (download,ssh-connection) [preauth]
Nov 24 22:43:07 compute-0 nova_compute[189608]: 2025-11-24 22:43:07.957 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:43:09 compute-0 nova_compute[189608]: 2025-11-24 22:43:09.552 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:43:12 compute-0 nova_compute[189608]: 2025-11-24 22:43:12.961 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:43:14 compute-0 sshd-session[258921]: Invalid user download from 185.217.1.246 port 19757
Nov 24 22:43:14 compute-0 nova_compute[189608]: 2025-11-24 22:43:14.560 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:43:14 compute-0 podman[258924]: 2025-11-24 22:43:14.783054493 +0000 UTC m=+0.087767168 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, managed_by=edpm_ansible, name=ubi9-minimal, config_id=edpm, vcs-type=git, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6)
Nov 24 22:43:14 compute-0 podman[258925]: 2025-11-24 22:43:14.78711144 +0000 UTC m=+0.083942710 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible)
Nov 24 22:43:14 compute-0 podman[258923]: 2025-11-24 22:43:14.789746652 +0000 UTC m=+0.098405401 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, container_name=kepler, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, config_id=edpm, distribution-scope=public, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, release=1214.1726694543, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Nov 24 22:43:15 compute-0 sshd-session[258921]: Disconnecting invalid user download 185.217.1.246 port 19757: Change of username or service not allowed: (download,ssh-connection) -> (log,ssh-connection) [preauth]
Nov 24 22:43:16 compute-0 nova_compute[189608]: 2025-11-24 22:43:16.023 189613 DEBUG oslo_concurrency.lockutils [None req-3279dc8c-d3cb-4ffb-a250-5b9fc882b2f6 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Acquiring lock "a3bee9ba-6618-44bd-a443-da9fff6862a9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:43:16 compute-0 nova_compute[189608]: 2025-11-24 22:43:16.023 189613 DEBUG oslo_concurrency.lockutils [None req-3279dc8c-d3cb-4ffb-a250-5b9fc882b2f6 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Lock "a3bee9ba-6618-44bd-a443-da9fff6862a9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:43:16 compute-0 nova_compute[189608]: 2025-11-24 22:43:16.024 189613 DEBUG oslo_concurrency.lockutils [None req-3279dc8c-d3cb-4ffb-a250-5b9fc882b2f6 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Acquiring lock "a3bee9ba-6618-44bd-a443-da9fff6862a9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:43:16 compute-0 nova_compute[189608]: 2025-11-24 22:43:16.024 189613 DEBUG oslo_concurrency.lockutils [None req-3279dc8c-d3cb-4ffb-a250-5b9fc882b2f6 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Lock "a3bee9ba-6618-44bd-a443-da9fff6862a9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:43:16 compute-0 nova_compute[189608]: 2025-11-24 22:43:16.025 189613 DEBUG oslo_concurrency.lockutils [None req-3279dc8c-d3cb-4ffb-a250-5b9fc882b2f6 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Lock "a3bee9ba-6618-44bd-a443-da9fff6862a9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:43:16 compute-0 nova_compute[189608]: 2025-11-24 22:43:16.026 189613 INFO nova.compute.manager [None req-3279dc8c-d3cb-4ffb-a250-5b9fc882b2f6 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Terminating instance
Nov 24 22:43:16 compute-0 nova_compute[189608]: 2025-11-24 22:43:16.028 189613 DEBUG nova.compute.manager [None req-3279dc8c-d3cb-4ffb-a250-5b9fc882b2f6 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 24 22:43:16 compute-0 kernel: tap5efccbc3-b2 (unregistering): left promiscuous mode
Nov 24 22:43:16 compute-0 NetworkManager[56413]: <info>  [1764024196.0656] device (tap5efccbc3-b2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 24 22:43:16 compute-0 ovn_controller[97889]: 2025-11-24T22:43:16Z|00188|binding|INFO|Releasing lport 5efccbc3-b2bb-4d9d-ba64-9382a4b2487b from this chassis (sb_readonly=0)
Nov 24 22:43:16 compute-0 nova_compute[189608]: 2025-11-24 22:43:16.069 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:43:16 compute-0 ovn_controller[97889]: 2025-11-24T22:43:16Z|00189|binding|INFO|Setting lport 5efccbc3-b2bb-4d9d-ba64-9382a4b2487b down in Southbound
Nov 24 22:43:16 compute-0 ovn_controller[97889]: 2025-11-24T22:43:16Z|00190|binding|INFO|Removing iface tap5efccbc3-b2 ovn-installed in OVS
Nov 24 22:43:16 compute-0 nova_compute[189608]: 2025-11-24 22:43:16.072 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:43:16 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:43:16.080 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:6c:bb 10.100.0.88'], port_security=['fa:16:3e:40:6c:bb 10.100.0.88'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.88/16', 'neutron:device_id': 'a3bee9ba-6618-44bd-a443-da9fff6862a9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a164481b-21c8-4cae-a6e9-b470d8a55a1f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4a6957a775da42c9b535753d6b0279d6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '24045f91-3265-40cf-b7b6-d2589223975b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3186cdf0-d894-4e3e-a84d-b369c1fcfb08, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], logical_port=5efccbc3-b2bb-4d9d-ba64-9382a4b2487b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:43:16 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:43:16.081 106776 INFO neutron.agent.ovn.metadata.agent [-] Port 5efccbc3-b2bb-4d9d-ba64-9382a4b2487b in datapath a164481b-21c8-4cae-a6e9-b470d8a55a1f unbound from our chassis
Nov 24 22:43:16 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:43:16.082 106776 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a164481b-21c8-4cae-a6e9-b470d8a55a1f
Nov 24 22:43:16 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:43:16.097 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[80a00fc7-8222-43a1-b78e-0a073e843bd4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:43:16 compute-0 nova_compute[189608]: 2025-11-24 22:43:16.111 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:43:16 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Nov 24 22:43:16 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Consumed 7min 11.034s CPU time.
Nov 24 22:43:16 compute-0 systemd-machined[155884]: Machine qemu-11-instance-0000000b terminated.
Nov 24 22:43:16 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:43:16.130 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[e94233f3-a8f6-46fa-8063-6c5919b8430f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:43:16 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:43:16.134 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[b2a35196-0ac3-4f84-bd2e-374d230e48fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:43:16 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:43:16.168 240041 DEBUG oslo.privsep.daemon [-] privsep: reply[0e390046-1dcd-4cc4-a28a-54149162a0e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:43:16 compute-0 podman[258984]: 2025-11-24 22:43:16.183422032 +0000 UTC m=+0.089935926 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 24 22:43:16 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:43:16.186 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[86c02af7-e97d-4e0e-ba3b-071aeac14ca7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa164481b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:a0:98'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 1960, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 1960, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 526439, 'reachable_time': 22056, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259046, 'error': None, 'target': 'ovnmeta-a164481b-21c8-4cae-a6e9-b470d8a55a1f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:43:16 compute-0 podman[258986]: 2025-11-24 22:43:16.198241714 +0000 UTC m=+0.099548496 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 24 22:43:16 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:43:16.202 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[ff851d9f-3592-44b9-8e0d-0c0c66c83525]: (4, ({'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tapa164481b-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 526458, 'tstamp': 526458}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259055, 'error': None, 'target': 'ovnmeta-a164481b-21c8-4cae-a6e9-b470d8a55a1f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa164481b-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 526464, 'tstamp': 526464}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259055, 'error': None, 'target': 'ovnmeta-a164481b-21c8-4cae-a6e9-b470d8a55a1f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:43:16 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:43:16.204 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa164481b-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:43:16 compute-0 nova_compute[189608]: 2025-11-24 22:43:16.206 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:43:16 compute-0 nova_compute[189608]: 2025-11-24 22:43:16.212 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:43:16 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:43:16.213 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa164481b-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:43:16 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:43:16.213 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 22:43:16 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:43:16.214 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa164481b-20, col_values=(('external_ids', {'iface-id': 'ce3870c0-48db-470b-8d5d-479134c9b554'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:43:16 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:43:16.214 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 22:43:16 compute-0 podman[258985]: 2025-11-24 22:43:16.218316881 +0000 UTC m=+0.124366841 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 24 22:43:16 compute-0 nova_compute[189608]: 2025-11-24 22:43:16.296 189613 INFO nova.virt.libvirt.driver [-] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Instance destroyed successfully.
Nov 24 22:43:16 compute-0 nova_compute[189608]: 2025-11-24 22:43:16.296 189613 DEBUG nova.objects.instance [None req-3279dc8c-d3cb-4ffb-a250-5b9fc882b2f6 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Lazy-loading 'resources' on Instance uuid a3bee9ba-6618-44bd-a443-da9fff6862a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:43:16 compute-0 nova_compute[189608]: 2025-11-24 22:43:16.317 189613 DEBUG nova.virt.libvirt.vif [None req-3279dc8c-d3cb-4ffb-a250-5b9fc882b2f6 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T22:30:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-0491564-asg-haouulsup5dm-5bjba57zh6x7-pvowny27behx',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-0491564-asg-haouulsup5dm-5bjba57zh6x7-pvowny27behx',id=11,image_ref='ea88776c-3c0b-4e74-99b4-08aadc81390f',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-24T22:30:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='c6477657-e9b0-476c-83b3-9dc474e946c6'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4a6957a775da42c9b535753d6b0279d6',ramdisk_id='',reservation_id='r-nqqdpcc1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ea88776c-3c0b-4e74-99b4-08aadc81390f',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-332462970',owner_user_name='tempest-PrometheusGabbiTest-332462970-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T22:30:23Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='fcf527fb124b42b9ab6a20cc0938b39f',uuid=a3bee9ba-6618-44bd-a443-da9fff6862a9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5efccbc3-b2bb-4d9d-ba64-9382a4b2487b", "address": "fa:16:3e:40:6c:bb", "network": {"id": "a164481b-21c8-4cae-a6e9-b470d8a55a1f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.88", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6957a775da42c9b535753d6b0279d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5efccbc3-b2", "ovs_interfaceid": "5efccbc3-b2bb-4d9d-ba64-9382a4b2487b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 24 22:43:16 compute-0 nova_compute[189608]: 2025-11-24 22:43:16.317 189613 DEBUG nova.network.os_vif_util [None req-3279dc8c-d3cb-4ffb-a250-5b9fc882b2f6 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Converting VIF {"id": "5efccbc3-b2bb-4d9d-ba64-9382a4b2487b", "address": "fa:16:3e:40:6c:bb", "network": {"id": "a164481b-21c8-4cae-a6e9-b470d8a55a1f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.88", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6957a775da42c9b535753d6b0279d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5efccbc3-b2", "ovs_interfaceid": "5efccbc3-b2bb-4d9d-ba64-9382a4b2487b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 22:43:16 compute-0 nova_compute[189608]: 2025-11-24 22:43:16.318 189613 DEBUG nova.network.os_vif_util [None req-3279dc8c-d3cb-4ffb-a250-5b9fc882b2f6 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:40:6c:bb,bridge_name='br-int',has_traffic_filtering=True,id=5efccbc3-b2bb-4d9d-ba64-9382a4b2487b,network=Network(a164481b-21c8-4cae-a6e9-b470d8a55a1f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5efccbc3-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 22:43:16 compute-0 nova_compute[189608]: 2025-11-24 22:43:16.318 189613 DEBUG os_vif [None req-3279dc8c-d3cb-4ffb-a250-5b9fc882b2f6 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:40:6c:bb,bridge_name='br-int',has_traffic_filtering=True,id=5efccbc3-b2bb-4d9d-ba64-9382a4b2487b,network=Network(a164481b-21c8-4cae-a6e9-b470d8a55a1f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5efccbc3-b2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 24 22:43:16 compute-0 nova_compute[189608]: 2025-11-24 22:43:16.320 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:43:16 compute-0 nova_compute[189608]: 2025-11-24 22:43:16.320 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5efccbc3-b2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:43:16 compute-0 nova_compute[189608]: 2025-11-24 22:43:16.322 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:43:16 compute-0 nova_compute[189608]: 2025-11-24 22:43:16.324 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:43:16 compute-0 nova_compute[189608]: 2025-11-24 22:43:16.326 189613 INFO os_vif [None req-3279dc8c-d3cb-4ffb-a250-5b9fc882b2f6 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:40:6c:bb,bridge_name='br-int',has_traffic_filtering=True,id=5efccbc3-b2bb-4d9d-ba64-9382a4b2487b,network=Network(a164481b-21c8-4cae-a6e9-b470d8a55a1f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5efccbc3-b2')
Nov 24 22:43:16 compute-0 nova_compute[189608]: 2025-11-24 22:43:16.326 189613 INFO nova.virt.libvirt.driver [None req-3279dc8c-d3cb-4ffb-a250-5b9fc882b2f6 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Deleting instance files /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9_del
Nov 24 22:43:16 compute-0 nova_compute[189608]: 2025-11-24 22:43:16.327 189613 INFO nova.virt.libvirt.driver [None req-3279dc8c-d3cb-4ffb-a250-5b9fc882b2f6 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Deletion of /var/lib/nova/instances/a3bee9ba-6618-44bd-a443-da9fff6862a9_del complete
Nov 24 22:43:16 compute-0 nova_compute[189608]: 2025-11-24 22:43:16.397 189613 INFO nova.compute.manager [None req-3279dc8c-d3cb-4ffb-a250-5b9fc882b2f6 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Took 0.37 seconds to destroy the instance on the hypervisor.
Nov 24 22:43:16 compute-0 nova_compute[189608]: 2025-11-24 22:43:16.398 189613 DEBUG oslo.service.loopingcall [None req-3279dc8c-d3cb-4ffb-a250-5b9fc882b2f6 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 24 22:43:16 compute-0 nova_compute[189608]: 2025-11-24 22:43:16.398 189613 DEBUG nova.compute.manager [-] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 24 22:43:16 compute-0 nova_compute[189608]: 2025-11-24 22:43:16.398 189613 DEBUG nova.network.neutron [-] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 24 22:43:16 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:43:16.403 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7e:33:1a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '7e:8a:65:da:aa:a2'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:43:16 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:43:16.404 106776 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 22:43:16 compute-0 nova_compute[189608]: 2025-11-24 22:43:16.403 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:43:16 compute-0 nova_compute[189608]: 2025-11-24 22:43:16.530 189613 DEBUG nova.compute.manager [req-06e40327-de34-44ed-8ba5-48918d6f8f23 req-f72e5851-8a97-4889-b881-d53544a3411f c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Received event network-vif-unplugged-5efccbc3-b2bb-4d9d-ba64-9382a4b2487b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:43:16 compute-0 nova_compute[189608]: 2025-11-24 22:43:16.530 189613 DEBUG oslo_concurrency.lockutils [req-06e40327-de34-44ed-8ba5-48918d6f8f23 req-f72e5851-8a97-4889-b881-d53544a3411f c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "a3bee9ba-6618-44bd-a443-da9fff6862a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:43:16 compute-0 nova_compute[189608]: 2025-11-24 22:43:16.531 189613 DEBUG oslo_concurrency.lockutils [req-06e40327-de34-44ed-8ba5-48918d6f8f23 req-f72e5851-8a97-4889-b881-d53544a3411f c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "a3bee9ba-6618-44bd-a443-da9fff6862a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:43:16 compute-0 nova_compute[189608]: 2025-11-24 22:43:16.531 189613 DEBUG oslo_concurrency.lockutils [req-06e40327-de34-44ed-8ba5-48918d6f8f23 req-f72e5851-8a97-4889-b881-d53544a3411f c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "a3bee9ba-6618-44bd-a443-da9fff6862a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:43:16 compute-0 nova_compute[189608]: 2025-11-24 22:43:16.531 189613 DEBUG nova.compute.manager [req-06e40327-de34-44ed-8ba5-48918d6f8f23 req-f72e5851-8a97-4889-b881-d53544a3411f c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] No waiting events found dispatching network-vif-unplugged-5efccbc3-b2bb-4d9d-ba64-9382a4b2487b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 22:43:16 compute-0 nova_compute[189608]: 2025-11-24 22:43:16.531 189613 DEBUG nova.compute.manager [req-06e40327-de34-44ed-8ba5-48918d6f8f23 req-f72e5851-8a97-4889-b881-d53544a3411f c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Received event network-vif-unplugged-5efccbc3-b2bb-4d9d-ba64-9382a4b2487b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 24 22:43:18 compute-0 nova_compute[189608]: 2025-11-24 22:43:18.754 189613 DEBUG nova.network.neutron [-] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:43:18 compute-0 nova_compute[189608]: 2025-11-24 22:43:18.769 189613 DEBUG nova.compute.manager [req-6e8bb3e6-6f38-426c-bb57-be3af8be4a11 req-eeb8f6a1-a308-428e-8294-39f5bbce44a8 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Received event network-vif-plugged-5efccbc3-b2bb-4d9d-ba64-9382a4b2487b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:43:18 compute-0 nova_compute[189608]: 2025-11-24 22:43:18.770 189613 DEBUG oslo_concurrency.lockutils [req-6e8bb3e6-6f38-426c-bb57-be3af8be4a11 req-eeb8f6a1-a308-428e-8294-39f5bbce44a8 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "a3bee9ba-6618-44bd-a443-da9fff6862a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:43:18 compute-0 nova_compute[189608]: 2025-11-24 22:43:18.770 189613 DEBUG oslo_concurrency.lockutils [req-6e8bb3e6-6f38-426c-bb57-be3af8be4a11 req-eeb8f6a1-a308-428e-8294-39f5bbce44a8 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "a3bee9ba-6618-44bd-a443-da9fff6862a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:43:18 compute-0 nova_compute[189608]: 2025-11-24 22:43:18.771 189613 DEBUG oslo_concurrency.lockutils [req-6e8bb3e6-6f38-426c-bb57-be3af8be4a11 req-eeb8f6a1-a308-428e-8294-39f5bbce44a8 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "a3bee9ba-6618-44bd-a443-da9fff6862a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:43:18 compute-0 nova_compute[189608]: 2025-11-24 22:43:18.771 189613 DEBUG nova.compute.manager [req-6e8bb3e6-6f38-426c-bb57-be3af8be4a11 req-eeb8f6a1-a308-428e-8294-39f5bbce44a8 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] No waiting events found dispatching network-vif-plugged-5efccbc3-b2bb-4d9d-ba64-9382a4b2487b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 22:43:18 compute-0 nova_compute[189608]: 2025-11-24 22:43:18.772 189613 WARNING nova.compute.manager [req-6e8bb3e6-6f38-426c-bb57-be3af8be4a11 req-eeb8f6a1-a308-428e-8294-39f5bbce44a8 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Received unexpected event network-vif-plugged-5efccbc3-b2bb-4d9d-ba64-9382a4b2487b for instance with vm_state active and task_state deleting.
Nov 24 22:43:18 compute-0 nova_compute[189608]: 2025-11-24 22:43:18.774 189613 INFO nova.compute.manager [-] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Took 2.38 seconds to deallocate network for instance.
Nov 24 22:43:18 compute-0 nova_compute[189608]: 2025-11-24 22:43:18.816 189613 DEBUG oslo_concurrency.lockutils [None req-3279dc8c-d3cb-4ffb-a250-5b9fc882b2f6 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:43:18 compute-0 nova_compute[189608]: 2025-11-24 22:43:18.816 189613 DEBUG oslo_concurrency.lockutils [None req-3279dc8c-d3cb-4ffb-a250-5b9fc882b2f6 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:43:18 compute-0 nova_compute[189608]: 2025-11-24 22:43:18.840 189613 DEBUG nova.compute.manager [req-1a67c1d4-533f-4c68-a230-42fdd31c2f27 req-17979a77-e8fc-47b8-a1fa-00ec4679ff55 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Received event network-vif-deleted-5efccbc3-b2bb-4d9d-ba64-9382a4b2487b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:43:18 compute-0 nova_compute[189608]: 2025-11-24 22:43:18.915 189613 DEBUG nova.compute.provider_tree [None req-3279dc8c-d3cb-4ffb-a250-5b9fc882b2f6 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:43:18 compute-0 nova_compute[189608]: 2025-11-24 22:43:18.926 189613 DEBUG nova.scheduler.client.report [None req-3279dc8c-d3cb-4ffb-a250-5b9fc882b2f6 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:43:18 compute-0 nova_compute[189608]: 2025-11-24 22:43:18.966 189613 DEBUG oslo_concurrency.lockutils [None req-3279dc8c-d3cb-4ffb-a250-5b9fc882b2f6 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.149s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:43:18 compute-0 nova_compute[189608]: 2025-11-24 22:43:18.992 189613 INFO nova.scheduler.client.report [None req-3279dc8c-d3cb-4ffb-a250-5b9fc882b2f6 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Deleted allocations for instance a3bee9ba-6618-44bd-a443-da9fff6862a9
Nov 24 22:43:19 compute-0 nova_compute[189608]: 2025-11-24 22:43:19.070 189613 DEBUG oslo_concurrency.lockutils [None req-3279dc8c-d3cb-4ffb-a250-5b9fc882b2f6 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Lock "a3bee9ba-6618-44bd-a443-da9fff6862a9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:43:19 compute-0 nova_compute[189608]: 2025-11-24 22:43:19.562 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:43:21 compute-0 sshd-session[259081]: Invalid user validator from 193.32.162.145 port 59338
Nov 24 22:43:21 compute-0 nova_compute[189608]: 2025-11-24 22:43:21.323 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:43:21 compute-0 sshd-session[259081]: Connection closed by invalid user validator 193.32.162.145 port 59338 [preauth]
Nov 24 22:43:21 compute-0 sshd-session[259078]: Invalid user log from 185.217.1.246 port 45156
Nov 24 22:43:22 compute-0 sshd-session[259078]: Disconnecting invalid user log 185.217.1.246 port 45156: Change of username or service not allowed: (log,ssh-connection) -> (zxcloudsetup,ssh-connection) [preauth]
Nov 24 22:43:24 compute-0 nova_compute[189608]: 2025-11-24 22:43:24.566 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:43:26 compute-0 nova_compute[189608]: 2025-11-24 22:43:26.326 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:43:26 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:43:26.405 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d2f80616-70e9-484c-836d-1edab81fe5d9, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:43:27 compute-0 sshd-session[259083]: Invalid user zxcloudsetup from 185.217.1.246 port 2984
Nov 24 22:43:28 compute-0 sshd-session[259083]: Disconnecting invalid user zxcloudsetup 185.217.1.246 port 2984: Change of username or service not allowed: (zxcloudsetup,ssh-connection) -> (onlime_r,ssh-connection) [preauth]
Nov 24 22:43:28 compute-0 nova_compute[189608]: 2025-11-24 22:43:28.820 189613 DEBUG oslo_concurrency.lockutils [None req-4e235c7c-1c6f-4c17-919e-abf0d89c26a8 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Acquiring lock "715e08a7-7174-4e14-a83d-67aab18333d8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:43:28 compute-0 nova_compute[189608]: 2025-11-24 22:43:28.821 189613 DEBUG oslo_concurrency.lockutils [None req-4e235c7c-1c6f-4c17-919e-abf0d89c26a8 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Lock "715e08a7-7174-4e14-a83d-67aab18333d8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:43:28 compute-0 nova_compute[189608]: 2025-11-24 22:43:28.821 189613 DEBUG oslo_concurrency.lockutils [None req-4e235c7c-1c6f-4c17-919e-abf0d89c26a8 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Acquiring lock "715e08a7-7174-4e14-a83d-67aab18333d8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:43:28 compute-0 nova_compute[189608]: 2025-11-24 22:43:28.822 189613 DEBUG oslo_concurrency.lockutils [None req-4e235c7c-1c6f-4c17-919e-abf0d89c26a8 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Lock "715e08a7-7174-4e14-a83d-67aab18333d8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:43:28 compute-0 nova_compute[189608]: 2025-11-24 22:43:28.822 189613 DEBUG oslo_concurrency.lockutils [None req-4e235c7c-1c6f-4c17-919e-abf0d89c26a8 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Lock "715e08a7-7174-4e14-a83d-67aab18333d8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:43:28 compute-0 nova_compute[189608]: 2025-11-24 22:43:28.824 189613 INFO nova.compute.manager [None req-4e235c7c-1c6f-4c17-919e-abf0d89c26a8 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Terminating instance
Nov 24 22:43:28 compute-0 nova_compute[189608]: 2025-11-24 22:43:28.826 189613 DEBUG nova.compute.manager [None req-4e235c7c-1c6f-4c17-919e-abf0d89c26a8 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 24 22:43:28 compute-0 kernel: tap9d8978ca-0c (unregistering): left promiscuous mode
Nov 24 22:43:28 compute-0 NetworkManager[56413]: <info>  [1764024208.8622] device (tap9d8978ca-0c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 24 22:43:28 compute-0 ovn_controller[97889]: 2025-11-24T22:43:28Z|00191|binding|INFO|Releasing lport 9d8978ca-0c88-4b94-bebb-cca47795447e from this chassis (sb_readonly=0)
Nov 24 22:43:28 compute-0 ovn_controller[97889]: 2025-11-24T22:43:28Z|00192|binding|INFO|Setting lport 9d8978ca-0c88-4b94-bebb-cca47795447e down in Southbound
Nov 24 22:43:28 compute-0 nova_compute[189608]: 2025-11-24 22:43:28.883 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:43:28 compute-0 ovn_controller[97889]: 2025-11-24T22:43:28Z|00193|binding|INFO|Removing iface tap9d8978ca-0c ovn-installed in OVS
Nov 24 22:43:28 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:43:28.892 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:93:b1 10.100.0.203'], port_security=['fa:16:3e:be:93:b1 10.100.0.203'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.203/16', 'neutron:device_id': '715e08a7-7174-4e14-a83d-67aab18333d8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a164481b-21c8-4cae-a6e9-b470d8a55a1f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4a6957a775da42c9b535753d6b0279d6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '24045f91-3265-40cf-b7b6-d2589223975b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3186cdf0-d894-4e3e-a84d-b369c1fcfb08, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], logical_port=9d8978ca-0c88-4b94-bebb-cca47795447e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:43:28 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:43:28.894 106776 INFO neutron.agent.ovn.metadata.agent [-] Port 9d8978ca-0c88-4b94-bebb-cca47795447e in datapath a164481b-21c8-4cae-a6e9-b470d8a55a1f unbound from our chassis
Nov 24 22:43:28 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:43:28.896 106776 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a164481b-21c8-4cae-a6e9-b470d8a55a1f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 22:43:28 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:43:28.898 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[54ef9824-f2c3-4ebb-9fcb-97ece9fa14bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:43:28 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:43:28.899 106776 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a164481b-21c8-4cae-a6e9-b470d8a55a1f namespace which is not needed anymore
Nov 24 22:43:28 compute-0 nova_compute[189608]: 2025-11-24 22:43:28.915 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:43:28 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Nov 24 22:43:28 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Consumed 6min 47.549s CPU time.
Nov 24 22:43:28 compute-0 systemd-machined[155884]: Machine qemu-16-instance-0000000f terminated.
Nov 24 22:43:29 compute-0 kernel: tap9d8978ca-0c: entered promiscuous mode
Nov 24 22:43:29 compute-0 NetworkManager[56413]: <info>  [1764024209.0617] manager: (tap9d8978ca-0c): new Tun device (/org/freedesktop/NetworkManager/Devices/76)
Nov 24 22:43:29 compute-0 nova_compute[189608]: 2025-11-24 22:43:29.063 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:43:29 compute-0 kernel: tap9d8978ca-0c (unregistering): left promiscuous mode
Nov 24 22:43:29 compute-0 ovn_controller[97889]: 2025-11-24T22:43:29Z|00194|binding|INFO|Claiming lport 9d8978ca-0c88-4b94-bebb-cca47795447e for this chassis.
Nov 24 22:43:29 compute-0 ovn_controller[97889]: 2025-11-24T22:43:29Z|00195|binding|INFO|9d8978ca-0c88-4b94-bebb-cca47795447e: Claiming fa:16:3e:be:93:b1 10.100.0.203
Nov 24 22:43:29 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:43:29.075 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:93:b1 10.100.0.203'], port_security=['fa:16:3e:be:93:b1 10.100.0.203'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.203/16', 'neutron:device_id': '715e08a7-7174-4e14-a83d-67aab18333d8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a164481b-21c8-4cae-a6e9-b470d8a55a1f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4a6957a775da42c9b535753d6b0279d6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '24045f91-3265-40cf-b7b6-d2589223975b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3186cdf0-d894-4e3e-a84d-b369c1fcfb08, chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], logical_port=9d8978ca-0c88-4b94-bebb-cca47795447e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:43:29 compute-0 ovn_controller[97889]: 2025-11-24T22:43:29Z|00196|binding|INFO|Setting lport 9d8978ca-0c88-4b94-bebb-cca47795447e ovn-installed in OVS
Nov 24 22:43:29 compute-0 nova_compute[189608]: 2025-11-24 22:43:29.100 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:43:29 compute-0 ovn_controller[97889]: 2025-11-24T22:43:29Z|00197|binding|INFO|Setting lport 9d8978ca-0c88-4b94-bebb-cca47795447e up in Southbound
Nov 24 22:43:29 compute-0 ovn_controller[97889]: 2025-11-24T22:43:29Z|00198|binding|INFO|Releasing lport 9d8978ca-0c88-4b94-bebb-cca47795447e from this chassis (sb_readonly=1)
Nov 24 22:43:29 compute-0 ovn_controller[97889]: 2025-11-24T22:43:29Z|00199|if_status|INFO|Not setting lport 9d8978ca-0c88-4b94-bebb-cca47795447e down as sb is readonly
Nov 24 22:43:29 compute-0 ovn_controller[97889]: 2025-11-24T22:43:29Z|00200|binding|INFO|Removing iface tap9d8978ca-0c ovn-installed in OVS
Nov 24 22:43:29 compute-0 nova_compute[189608]: 2025-11-24 22:43:29.103 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:43:29 compute-0 ovn_controller[97889]: 2025-11-24T22:43:29Z|00201|binding|INFO|Releasing lport 9d8978ca-0c88-4b94-bebb-cca47795447e from this chassis (sb_readonly=0)
Nov 24 22:43:29 compute-0 ovn_controller[97889]: 2025-11-24T22:43:29Z|00202|binding|INFO|Setting lport 9d8978ca-0c88-4b94-bebb-cca47795447e down in Southbound
Nov 24 22:43:29 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:43:29.113 106776 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:93:b1 10.100.0.203'], port_security=['fa:16:3e:be:93:b1 10.100.0.203'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.203/16', 'neutron:device_id': '715e08a7-7174-4e14-a83d-67aab18333d8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a164481b-21c8-4cae-a6e9-b470d8a55a1f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4a6957a775da42c9b535753d6b0279d6', 'neutron:revision_number': '5', 'neutron:security_group_ids': '24045f91-3265-40cf-b7b6-d2589223975b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3186cdf0-d894-4e3e-a84d-b369c1fcfb08, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>], logical_port=9d8978ca-0c88-4b94-bebb-cca47795447e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd6e38836a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 22:43:29 compute-0 nova_compute[189608]: 2025-11-24 22:43:29.114 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:43:29 compute-0 neutron-haproxy-ovnmeta-a164481b-21c8-4cae-a6e9-b470d8a55a1f[252646]: [NOTICE]   (252650) : haproxy version is 2.8.14-c23fe91
Nov 24 22:43:29 compute-0 neutron-haproxy-ovnmeta-a164481b-21c8-4cae-a6e9-b470d8a55a1f[252646]: [NOTICE]   (252650) : path to executable is /usr/sbin/haproxy
Nov 24 22:43:29 compute-0 neutron-haproxy-ovnmeta-a164481b-21c8-4cae-a6e9-b470d8a55a1f[252646]: [WARNING]  (252650) : Exiting Master process...
Nov 24 22:43:29 compute-0 neutron-haproxy-ovnmeta-a164481b-21c8-4cae-a6e9-b470d8a55a1f[252646]: [ALERT]    (252650) : Current worker (252652) exited with code 143 (Terminated)
Nov 24 22:43:29 compute-0 neutron-haproxy-ovnmeta-a164481b-21c8-4cae-a6e9-b470d8a55a1f[252646]: [WARNING]  (252650) : All workers exited. Exiting... (0)
Nov 24 22:43:29 compute-0 systemd[1]: libpod-077e21c087588532b333ea30d5f4233709e12de421a1c7ab64f14d98991dcf93.scope: Deactivated successfully.
Nov 24 22:43:29 compute-0 nova_compute[189608]: 2025-11-24 22:43:29.128 189613 INFO nova.virt.libvirt.driver [-] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Instance destroyed successfully.
Nov 24 22:43:29 compute-0 nova_compute[189608]: 2025-11-24 22:43:29.128 189613 DEBUG nova.objects.instance [None req-4e235c7c-1c6f-4c17-919e-abf0d89c26a8 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Lazy-loading 'resources' on Instance uuid 715e08a7-7174-4e14-a83d-67aab18333d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 22:43:29 compute-0 podman[259109]: 2025-11-24 22:43:29.130727897 +0000 UTC m=+0.080760080 container died 077e21c087588532b333ea30d5f4233709e12de421a1c7ab64f14d98991dcf93 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a164481b-21c8-4cae-a6e9-b470d8a55a1f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 24 22:43:29 compute-0 nova_compute[189608]: 2025-11-24 22:43:29.143 189613 DEBUG nova.virt.libvirt.vif [None req-4e235c7c-1c6f-4c17-919e-abf0d89c26a8 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T22:33:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-0491564-asg-haouulsup5dm-khtqnyy46ddc-wz3ip25x6yfo',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-0491564-asg-haouulsup5dm-khtqnyy46ddc-wz3ip25x6yfo',id=15,image_ref='ea88776c-3c0b-4e74-99b4-08aadc81390f',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-24T22:33:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='c6477657-e9b0-476c-83b3-9dc474e946c6'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4a6957a775da42c9b535753d6b0279d6',ramdisk_id='',reservation_id='r-051484o2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ea88776c-3c0b-4e74-99b4-08aadc81390f',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-332462970',owner_user_name='tempest-PrometheusGabbiTest-332462970-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T22:33:24Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='fcf527fb124b42b9ab6a20cc0938b39f',uuid=715e08a7-7174-4e14-a83d-67aab18333d8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9d8978ca-0c88-4b94-bebb-cca47795447e", "address": "fa:16:3e:be:93:b1", "network": {"id": "a164481b-21c8-4cae-a6e9-b470d8a55a1f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6957a775da42c9b535753d6b0279d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d8978ca-0c", "ovs_interfaceid": "9d8978ca-0c88-4b94-bebb-cca47795447e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 24 22:43:29 compute-0 nova_compute[189608]: 2025-11-24 22:43:29.143 189613 DEBUG nova.network.os_vif_util [None req-4e235c7c-1c6f-4c17-919e-abf0d89c26a8 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Converting VIF {"id": "9d8978ca-0c88-4b94-bebb-cca47795447e", "address": "fa:16:3e:be:93:b1", "network": {"id": "a164481b-21c8-4cae-a6e9-b470d8a55a1f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6957a775da42c9b535753d6b0279d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d8978ca-0c", "ovs_interfaceid": "9d8978ca-0c88-4b94-bebb-cca47795447e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 22:43:29 compute-0 nova_compute[189608]: 2025-11-24 22:43:29.144 189613 DEBUG nova.network.os_vif_util [None req-4e235c7c-1c6f-4c17-919e-abf0d89c26a8 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:be:93:b1,bridge_name='br-int',has_traffic_filtering=True,id=9d8978ca-0c88-4b94-bebb-cca47795447e,network=Network(a164481b-21c8-4cae-a6e9-b470d8a55a1f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d8978ca-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 22:43:29 compute-0 nova_compute[189608]: 2025-11-24 22:43:29.145 189613 DEBUG os_vif [None req-4e235c7c-1c6f-4c17-919e-abf0d89c26a8 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:be:93:b1,bridge_name='br-int',has_traffic_filtering=True,id=9d8978ca-0c88-4b94-bebb-cca47795447e,network=Network(a164481b-21c8-4cae-a6e9-b470d8a55a1f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d8978ca-0c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 24 22:43:29 compute-0 nova_compute[189608]: 2025-11-24 22:43:29.147 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:43:29 compute-0 nova_compute[189608]: 2025-11-24 22:43:29.148 189613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d8978ca-0c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:43:29 compute-0 nova_compute[189608]: 2025-11-24 22:43:29.153 189613 DEBUG nova.compute.manager [req-bcd06459-6e1e-44f6-aef2-798d5a1b7285 req-04e7fa5f-9299-45a8-976a-366c477fc872 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Received event network-vif-unplugged-9d8978ca-0c88-4b94-bebb-cca47795447e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:43:29 compute-0 nova_compute[189608]: 2025-11-24 22:43:29.153 189613 DEBUG oslo_concurrency.lockutils [req-bcd06459-6e1e-44f6-aef2-798d5a1b7285 req-04e7fa5f-9299-45a8-976a-366c477fc872 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "715e08a7-7174-4e14-a83d-67aab18333d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:43:29 compute-0 nova_compute[189608]: 2025-11-24 22:43:29.154 189613 DEBUG oslo_concurrency.lockutils [req-bcd06459-6e1e-44f6-aef2-798d5a1b7285 req-04e7fa5f-9299-45a8-976a-366c477fc872 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "715e08a7-7174-4e14-a83d-67aab18333d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:43:29 compute-0 nova_compute[189608]: 2025-11-24 22:43:29.154 189613 DEBUG oslo_concurrency.lockutils [req-bcd06459-6e1e-44f6-aef2-798d5a1b7285 req-04e7fa5f-9299-45a8-976a-366c477fc872 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "715e08a7-7174-4e14-a83d-67aab18333d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:43:29 compute-0 nova_compute[189608]: 2025-11-24 22:43:29.154 189613 DEBUG nova.compute.manager [req-bcd06459-6e1e-44f6-aef2-798d5a1b7285 req-04e7fa5f-9299-45a8-976a-366c477fc872 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] No waiting events found dispatching network-vif-unplugged-9d8978ca-0c88-4b94-bebb-cca47795447e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 22:43:29 compute-0 nova_compute[189608]: 2025-11-24 22:43:29.155 189613 DEBUG nova.compute.manager [req-bcd06459-6e1e-44f6-aef2-798d5a1b7285 req-04e7fa5f-9299-45a8-976a-366c477fc872 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Received event network-vif-unplugged-9d8978ca-0c88-4b94-bebb-cca47795447e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 24 22:43:29 compute-0 nova_compute[189608]: 2025-11-24 22:43:29.156 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:43:29 compute-0 nova_compute[189608]: 2025-11-24 22:43:29.158 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 22:43:29 compute-0 nova_compute[189608]: 2025-11-24 22:43:29.162 189613 INFO os_vif [None req-4e235c7c-1c6f-4c17-919e-abf0d89c26a8 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:be:93:b1,bridge_name='br-int',has_traffic_filtering=True,id=9d8978ca-0c88-4b94-bebb-cca47795447e,network=Network(a164481b-21c8-4cae-a6e9-b470d8a55a1f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d8978ca-0c')
Nov 24 22:43:29 compute-0 nova_compute[189608]: 2025-11-24 22:43:29.163 189613 INFO nova.virt.libvirt.driver [None req-4e235c7c-1c6f-4c17-919e-abf0d89c26a8 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Deleting instance files /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8_del
Nov 24 22:43:29 compute-0 nova_compute[189608]: 2025-11-24 22:43:29.165 189613 INFO nova.virt.libvirt.driver [None req-4e235c7c-1c6f-4c17-919e-abf0d89c26a8 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Deletion of /var/lib/nova/instances/715e08a7-7174-4e14-a83d-67aab18333d8_del complete
Nov 24 22:43:29 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-077e21c087588532b333ea30d5f4233709e12de421a1c7ab64f14d98991dcf93-userdata-shm.mount: Deactivated successfully.
Nov 24 22:43:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-f70b734fc8588ddef69b4bd6072f4426711e6cbc50b7da65732f497ffbd8ca19-merged.mount: Deactivated successfully.
Nov 24 22:43:29 compute-0 podman[259109]: 2025-11-24 22:43:29.184298228 +0000 UTC m=+0.134330421 container cleanup 077e21c087588532b333ea30d5f4233709e12de421a1c7ab64f14d98991dcf93 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a164481b-21c8-4cae-a6e9-b470d8a55a1f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 22:43:29 compute-0 systemd[1]: libpod-conmon-077e21c087588532b333ea30d5f4233709e12de421a1c7ab64f14d98991dcf93.scope: Deactivated successfully.
Nov 24 22:43:29 compute-0 nova_compute[189608]: 2025-11-24 22:43:29.233 189613 INFO nova.compute.manager [None req-4e235c7c-1c6f-4c17-919e-abf0d89c26a8 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Took 0.41 seconds to destroy the instance on the hypervisor.
Nov 24 22:43:29 compute-0 nova_compute[189608]: 2025-11-24 22:43:29.234 189613 DEBUG oslo.service.loopingcall [None req-4e235c7c-1c6f-4c17-919e-abf0d89c26a8 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 24 22:43:29 compute-0 nova_compute[189608]: 2025-11-24 22:43:29.234 189613 DEBUG nova.compute.manager [-] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 24 22:43:29 compute-0 nova_compute[189608]: 2025-11-24 22:43:29.235 189613 DEBUG nova.network.neutron [-] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 24 22:43:29 compute-0 podman[259147]: 2025-11-24 22:43:29.266317087 +0000 UTC m=+0.053639325 container remove 077e21c087588532b333ea30d5f4233709e12de421a1c7ab64f14d98991dcf93 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a164481b-21c8-4cae-a6e9-b470d8a55a1f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 24 22:43:29 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:43:29.277 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[bd91bed6-6462-4ce7-b2fd-6288fa3e8636]: (4, ('Mon Nov 24 10:43:29 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a164481b-21c8-4cae-a6e9-b470d8a55a1f (077e21c087588532b333ea30d5f4233709e12de421a1c7ab64f14d98991dcf93)\n077e21c087588532b333ea30d5f4233709e12de421a1c7ab64f14d98991dcf93\nMon Nov 24 10:43:29 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a164481b-21c8-4cae-a6e9-b470d8a55a1f (077e21c087588532b333ea30d5f4233709e12de421a1c7ab64f14d98991dcf93)\n077e21c087588532b333ea30d5f4233709e12de421a1c7ab64f14d98991dcf93\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:43:29 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:43:29.279 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[3c3b2aa0-42fd-422d-9282-4378592c598f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:43:29 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:43:29.280 106776 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa164481b-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 22:43:29 compute-0 nova_compute[189608]: 2025-11-24 22:43:29.282 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:43:29 compute-0 kernel: tapa164481b-20: left promiscuous mode
Nov 24 22:43:29 compute-0 nova_compute[189608]: 2025-11-24 22:43:29.294 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:43:29 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:43:29.301 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[b71bc08d-872b-4f81-ba76-3e49fd97d5bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:43:29 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:43:29.314 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[778d86a8-8370-40b7-95e9-3fa719c7923c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:43:29 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:43:29.316 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[babd27df-c8e2-4d39-9088-25aca112ec6e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:43:29 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:43:29.329 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[092af072-f15b-432d-8dde-f87b3f3fa44f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 526429, 'reachable_time': 32216, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259163, 'error': None, 'target': 'ovnmeta-a164481b-21c8-4cae-a6e9-b470d8a55a1f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:43:29 compute-0 systemd[1]: run-netns-ovnmeta\x2da164481b\x2d21c8\x2d4cae\x2da6e9\x2db470d8a55a1f.mount: Deactivated successfully.
Nov 24 22:43:29 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:43:29.333 106891 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a164481b-21c8-4cae-a6e9-b470d8a55a1f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 24 22:43:29 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:43:29.333 106891 DEBUG oslo.privsep.daemon [-] privsep: reply[a4505696-c046-446e-9f69-f9287a53e71c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:43:29 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:43:29.335 106776 INFO neutron.agent.ovn.metadata.agent [-] Port 9d8978ca-0c88-4b94-bebb-cca47795447e in datapath a164481b-21c8-4cae-a6e9-b470d8a55a1f unbound from our chassis
Nov 24 22:43:29 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:43:29.336 106776 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a164481b-21c8-4cae-a6e9-b470d8a55a1f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 22:43:29 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:43:29.337 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[b1e41aaf-e502-4a14-89b8-5afdf503379b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:43:29 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:43:29.337 106776 INFO neutron.agent.ovn.metadata.agent [-] Port 9d8978ca-0c88-4b94-bebb-cca47795447e in datapath a164481b-21c8-4cae-a6e9-b470d8a55a1f unbound from our chassis
Nov 24 22:43:29 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:43:29.338 106776 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a164481b-21c8-4cae-a6e9-b470d8a55a1f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 22:43:29 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:43:29.339 240020 DEBUG oslo.privsep.daemon [-] privsep: reply[88a7e657-f778-4db6-be1f-51457fa0496a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 22:43:29 compute-0 nova_compute[189608]: 2025-11-24 22:43:29.569 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:43:29 compute-0 podman[203795]: time="2025-11-24T22:43:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:43:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:43:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Nov 24 22:43:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:43:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4342 "" "Go-http-client/1.1"
Nov 24 22:43:30 compute-0 podman[259164]: 2025-11-24 22:43:30.543453353 +0000 UTC m=+0.088943966 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 24 22:43:30 compute-0 nova_compute[189608]: 2025-11-24 22:43:30.655 189613 DEBUG nova.network.neutron [-] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 22:43:30 compute-0 nova_compute[189608]: 2025-11-24 22:43:30.675 189613 INFO nova.compute.manager [-] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Took 1.44 seconds to deallocate network for instance.
Nov 24 22:43:30 compute-0 nova_compute[189608]: 2025-11-24 22:43:30.742 189613 DEBUG oslo_concurrency.lockutils [None req-4e235c7c-1c6f-4c17-919e-abf0d89c26a8 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:43:30 compute-0 nova_compute[189608]: 2025-11-24 22:43:30.742 189613 DEBUG oslo_concurrency.lockutils [None req-4e235c7c-1c6f-4c17-919e-abf0d89c26a8 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:43:30 compute-0 nova_compute[189608]: 2025-11-24 22:43:30.765 189613 DEBUG nova.compute.manager [req-29d270ff-9b12-44c9-8f9e-366fe6dfc807 req-21991e55-2712-4696-8554-453f30f5c06a c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Received event network-vif-deleted-9d8978ca-0c88-4b94-bebb-cca47795447e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:43:30 compute-0 nova_compute[189608]: 2025-11-24 22:43:30.833 189613 DEBUG nova.compute.provider_tree [None req-4e235c7c-1c6f-4c17-919e-abf0d89c26a8 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:43:30 compute-0 nova_compute[189608]: 2025-11-24 22:43:30.851 189613 DEBUG nova.scheduler.client.report [None req-4e235c7c-1c6f-4c17-919e-abf0d89c26a8 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:43:30 compute-0 nova_compute[189608]: 2025-11-24 22:43:30.879 189613 DEBUG oslo_concurrency.lockutils [None req-4e235c7c-1c6f-4c17-919e-abf0d89c26a8 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.137s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:43:30 compute-0 nova_compute[189608]: 2025-11-24 22:43:30.915 189613 INFO nova.scheduler.client.report [None req-4e235c7c-1c6f-4c17-919e-abf0d89c26a8 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Deleted allocations for instance 715e08a7-7174-4e14-a83d-67aab18333d8
Nov 24 22:43:31 compute-0 nova_compute[189608]: 2025-11-24 22:43:30.999 189613 DEBUG oslo_concurrency.lockutils [None req-4e235c7c-1c6f-4c17-919e-abf0d89c26a8 fcf527fb124b42b9ab6a20cc0938b39f 4a6957a775da42c9b535753d6b0279d6 - - default default] Lock "715e08a7-7174-4e14-a83d-67aab18333d8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.178s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:43:31 compute-0 nova_compute[189608]: 2025-11-24 22:43:31.294 189613 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764024196.293119, a3bee9ba-6618-44bd-a443-da9fff6862a9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:43:31 compute-0 nova_compute[189608]: 2025-11-24 22:43:31.295 189613 INFO nova.compute.manager [-] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] VM Stopped (Lifecycle Event)
Nov 24 22:43:31 compute-0 nova_compute[189608]: 2025-11-24 22:43:31.317 189613 DEBUG nova.compute.manager [None req-e9f88663-d9dd-4ddf-9167-c571018187df - - - - - -] [instance: a3bee9ba-6618-44bd-a443-da9fff6862a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:43:31 compute-0 nova_compute[189608]: 2025-11-24 22:43:31.379 189613 DEBUG nova.compute.manager [req-b42051a0-f9e3-4667-96c5-94fea9b74bba req-5f784787-b59e-4d0b-98a6-f029a1afa478 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Received event network-vif-plugged-9d8978ca-0c88-4b94-bebb-cca47795447e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:43:31 compute-0 nova_compute[189608]: 2025-11-24 22:43:31.380 189613 DEBUG oslo_concurrency.lockutils [req-b42051a0-f9e3-4667-96c5-94fea9b74bba req-5f784787-b59e-4d0b-98a6-f029a1afa478 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "715e08a7-7174-4e14-a83d-67aab18333d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:43:31 compute-0 nova_compute[189608]: 2025-11-24 22:43:31.381 189613 DEBUG oslo_concurrency.lockutils [req-b42051a0-f9e3-4667-96c5-94fea9b74bba req-5f784787-b59e-4d0b-98a6-f029a1afa478 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "715e08a7-7174-4e14-a83d-67aab18333d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:43:31 compute-0 nova_compute[189608]: 2025-11-24 22:43:31.381 189613 DEBUG oslo_concurrency.lockutils [req-b42051a0-f9e3-4667-96c5-94fea9b74bba req-5f784787-b59e-4d0b-98a6-f029a1afa478 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "715e08a7-7174-4e14-a83d-67aab18333d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:43:31 compute-0 nova_compute[189608]: 2025-11-24 22:43:31.382 189613 DEBUG nova.compute.manager [req-b42051a0-f9e3-4667-96c5-94fea9b74bba req-5f784787-b59e-4d0b-98a6-f029a1afa478 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] No waiting events found dispatching network-vif-plugged-9d8978ca-0c88-4b94-bebb-cca47795447e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 22:43:31 compute-0 nova_compute[189608]: 2025-11-24 22:43:31.383 189613 WARNING nova.compute.manager [req-b42051a0-f9e3-4667-96c5-94fea9b74bba req-5f784787-b59e-4d0b-98a6-f029a1afa478 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Received unexpected event network-vif-plugged-9d8978ca-0c88-4b94-bebb-cca47795447e for instance with vm_state deleted and task_state None.
Nov 24 22:43:31 compute-0 nova_compute[189608]: 2025-11-24 22:43:31.383 189613 DEBUG nova.compute.manager [req-b42051a0-f9e3-4667-96c5-94fea9b74bba req-5f784787-b59e-4d0b-98a6-f029a1afa478 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Received event network-vif-plugged-9d8978ca-0c88-4b94-bebb-cca47795447e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:43:31 compute-0 nova_compute[189608]: 2025-11-24 22:43:31.384 189613 DEBUG oslo_concurrency.lockutils [req-b42051a0-f9e3-4667-96c5-94fea9b74bba req-5f784787-b59e-4d0b-98a6-f029a1afa478 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "715e08a7-7174-4e14-a83d-67aab18333d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:43:31 compute-0 nova_compute[189608]: 2025-11-24 22:43:31.385 189613 DEBUG oslo_concurrency.lockutils [req-b42051a0-f9e3-4667-96c5-94fea9b74bba req-5f784787-b59e-4d0b-98a6-f029a1afa478 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "715e08a7-7174-4e14-a83d-67aab18333d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:43:31 compute-0 nova_compute[189608]: 2025-11-24 22:43:31.386 189613 DEBUG oslo_concurrency.lockutils [req-b42051a0-f9e3-4667-96c5-94fea9b74bba req-5f784787-b59e-4d0b-98a6-f029a1afa478 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "715e08a7-7174-4e14-a83d-67aab18333d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:43:31 compute-0 nova_compute[189608]: 2025-11-24 22:43:31.386 189613 DEBUG nova.compute.manager [req-b42051a0-f9e3-4667-96c5-94fea9b74bba req-5f784787-b59e-4d0b-98a6-f029a1afa478 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] No waiting events found dispatching network-vif-plugged-9d8978ca-0c88-4b94-bebb-cca47795447e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 22:43:31 compute-0 nova_compute[189608]: 2025-11-24 22:43:31.387 189613 WARNING nova.compute.manager [req-b42051a0-f9e3-4667-96c5-94fea9b74bba req-5f784787-b59e-4d0b-98a6-f029a1afa478 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Received unexpected event network-vif-plugged-9d8978ca-0c88-4b94-bebb-cca47795447e for instance with vm_state deleted and task_state None.
Nov 24 22:43:31 compute-0 nova_compute[189608]: 2025-11-24 22:43:31.388 189613 DEBUG nova.compute.manager [req-b42051a0-f9e3-4667-96c5-94fea9b74bba req-5f784787-b59e-4d0b-98a6-f029a1afa478 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Received event network-vif-plugged-9d8978ca-0c88-4b94-bebb-cca47795447e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:43:31 compute-0 nova_compute[189608]: 2025-11-24 22:43:31.389 189613 DEBUG oslo_concurrency.lockutils [req-b42051a0-f9e3-4667-96c5-94fea9b74bba req-5f784787-b59e-4d0b-98a6-f029a1afa478 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "715e08a7-7174-4e14-a83d-67aab18333d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:43:31 compute-0 nova_compute[189608]: 2025-11-24 22:43:31.389 189613 DEBUG oslo_concurrency.lockutils [req-b42051a0-f9e3-4667-96c5-94fea9b74bba req-5f784787-b59e-4d0b-98a6-f029a1afa478 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "715e08a7-7174-4e14-a83d-67aab18333d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:43:31 compute-0 nova_compute[189608]: 2025-11-24 22:43:31.390 189613 DEBUG oslo_concurrency.lockutils [req-b42051a0-f9e3-4667-96c5-94fea9b74bba req-5f784787-b59e-4d0b-98a6-f029a1afa478 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "715e08a7-7174-4e14-a83d-67aab18333d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:43:31 compute-0 nova_compute[189608]: 2025-11-24 22:43:31.391 189613 DEBUG nova.compute.manager [req-b42051a0-f9e3-4667-96c5-94fea9b74bba req-5f784787-b59e-4d0b-98a6-f029a1afa478 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] No waiting events found dispatching network-vif-plugged-9d8978ca-0c88-4b94-bebb-cca47795447e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 22:43:31 compute-0 nova_compute[189608]: 2025-11-24 22:43:31.391 189613 WARNING nova.compute.manager [req-b42051a0-f9e3-4667-96c5-94fea9b74bba req-5f784787-b59e-4d0b-98a6-f029a1afa478 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Received unexpected event network-vif-plugged-9d8978ca-0c88-4b94-bebb-cca47795447e for instance with vm_state deleted and task_state None.
Nov 24 22:43:31 compute-0 nova_compute[189608]: 2025-11-24 22:43:31.392 189613 DEBUG nova.compute.manager [req-b42051a0-f9e3-4667-96c5-94fea9b74bba req-5f784787-b59e-4d0b-98a6-f029a1afa478 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Received event network-vif-unplugged-9d8978ca-0c88-4b94-bebb-cca47795447e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:43:31 compute-0 nova_compute[189608]: 2025-11-24 22:43:31.393 189613 DEBUG oslo_concurrency.lockutils [req-b42051a0-f9e3-4667-96c5-94fea9b74bba req-5f784787-b59e-4d0b-98a6-f029a1afa478 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "715e08a7-7174-4e14-a83d-67aab18333d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:43:31 compute-0 nova_compute[189608]: 2025-11-24 22:43:31.393 189613 DEBUG oslo_concurrency.lockutils [req-b42051a0-f9e3-4667-96c5-94fea9b74bba req-5f784787-b59e-4d0b-98a6-f029a1afa478 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "715e08a7-7174-4e14-a83d-67aab18333d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:43:31 compute-0 nova_compute[189608]: 2025-11-24 22:43:31.394 189613 DEBUG oslo_concurrency.lockutils [req-b42051a0-f9e3-4667-96c5-94fea9b74bba req-5f784787-b59e-4d0b-98a6-f029a1afa478 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "715e08a7-7174-4e14-a83d-67aab18333d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:43:31 compute-0 nova_compute[189608]: 2025-11-24 22:43:31.395 189613 DEBUG nova.compute.manager [req-b42051a0-f9e3-4667-96c5-94fea9b74bba req-5f784787-b59e-4d0b-98a6-f029a1afa478 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] No waiting events found dispatching network-vif-unplugged-9d8978ca-0c88-4b94-bebb-cca47795447e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 22:43:31 compute-0 nova_compute[189608]: 2025-11-24 22:43:31.395 189613 WARNING nova.compute.manager [req-b42051a0-f9e3-4667-96c5-94fea9b74bba req-5f784787-b59e-4d0b-98a6-f029a1afa478 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Received unexpected event network-vif-unplugged-9d8978ca-0c88-4b94-bebb-cca47795447e for instance with vm_state deleted and task_state None.
Nov 24 22:43:31 compute-0 nova_compute[189608]: 2025-11-24 22:43:31.396 189613 DEBUG nova.compute.manager [req-b42051a0-f9e3-4667-96c5-94fea9b74bba req-5f784787-b59e-4d0b-98a6-f029a1afa478 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Received event network-vif-plugged-9d8978ca-0c88-4b94-bebb-cca47795447e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 22:43:31 compute-0 nova_compute[189608]: 2025-11-24 22:43:31.397 189613 DEBUG oslo_concurrency.lockutils [req-b42051a0-f9e3-4667-96c5-94fea9b74bba req-5f784787-b59e-4d0b-98a6-f029a1afa478 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Acquiring lock "715e08a7-7174-4e14-a83d-67aab18333d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:43:31 compute-0 nova_compute[189608]: 2025-11-24 22:43:31.397 189613 DEBUG oslo_concurrency.lockutils [req-b42051a0-f9e3-4667-96c5-94fea9b74bba req-5f784787-b59e-4d0b-98a6-f029a1afa478 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "715e08a7-7174-4e14-a83d-67aab18333d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:43:31 compute-0 nova_compute[189608]: 2025-11-24 22:43:31.398 189613 DEBUG oslo_concurrency.lockutils [req-b42051a0-f9e3-4667-96c5-94fea9b74bba req-5f784787-b59e-4d0b-98a6-f029a1afa478 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] Lock "715e08a7-7174-4e14-a83d-67aab18333d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:43:31 compute-0 nova_compute[189608]: 2025-11-24 22:43:31.399 189613 DEBUG nova.compute.manager [req-b42051a0-f9e3-4667-96c5-94fea9b74bba req-5f784787-b59e-4d0b-98a6-f029a1afa478 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] No waiting events found dispatching network-vif-plugged-9d8978ca-0c88-4b94-bebb-cca47795447e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 22:43:31 compute-0 nova_compute[189608]: 2025-11-24 22:43:31.399 189613 WARNING nova.compute.manager [req-b42051a0-f9e3-4667-96c5-94fea9b74bba req-5f784787-b59e-4d0b-98a6-f029a1afa478 c8b88d9b3964477ba3ca331e2c0256df d6c8dcf77a8b48efa9896da8e81571d5 - - default default] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Received unexpected event network-vif-plugged-9d8978ca-0c88-4b94-bebb-cca47795447e for instance with vm_state deleted and task_state None.
Nov 24 22:43:31 compute-0 openstack_network_exporter[205945]: ERROR   22:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:43:31 compute-0 openstack_network_exporter[205945]: ERROR   22:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:43:31 compute-0 openstack_network_exporter[205945]: ERROR   22:43:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:43:31 compute-0 openstack_network_exporter[205945]: ERROR   22:43:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:43:31 compute-0 openstack_network_exporter[205945]: ERROR   22:43:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:43:34 compute-0 nova_compute[189608]: 2025-11-24 22:43:34.152 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:43:34 compute-0 nova_compute[189608]: 2025-11-24 22:43:34.572 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:43:35 compute-0 podman[259187]: 2025-11-24 22:43:35.545561906 +0000 UTC m=+0.104781850 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 24 22:43:35 compute-0 podman[259188]: 2025-11-24 22:43:35.57679801 +0000 UTC m=+0.116288178 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4)
Nov 24 22:43:35 compute-0 sshd-session[259185]: Invalid user onlime_r from 185.217.1.246 port 43294
Nov 24 22:43:36 compute-0 sshd-session[259185]: Disconnecting invalid user onlime_r 185.217.1.246 port 43294: Change of username or service not allowed: (onlime_r,ssh-connection) -> (Test,ssh-connection) [preauth]
Nov 24 22:43:39 compute-0 nova_compute[189608]: 2025-11-24 22:43:39.157 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:43:39 compute-0 nova_compute[189608]: 2025-11-24 22:43:39.574 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:43:40 compute-0 sshd-session[259225]: Invalid user Test from 185.217.1.246 port 59296
Nov 24 22:43:41 compute-0 nova_compute[189608]: 2025-11-24 22:43:41.790 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:43:41 compute-0 nova_compute[189608]: 2025-11-24 22:43:41.816 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:43:41 compute-0 nova_compute[189608]: 2025-11-24 22:43:41.850 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:43:41 compute-0 nova_compute[189608]: 2025-11-24 22:43:41.851 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:43:41 compute-0 nova_compute[189608]: 2025-11-24 22:43:41.852 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:43:41 compute-0 nova_compute[189608]: 2025-11-24 22:43:41.853 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 22:43:42 compute-0 nova_compute[189608]: 2025-11-24 22:43:42.256 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:43:42 compute-0 nova_compute[189608]: 2025-11-24 22:43:42.258 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5317MB free_disk=72.12620162963867GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 22:43:42 compute-0 nova_compute[189608]: 2025-11-24 22:43:42.259 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:43:42 compute-0 nova_compute[189608]: 2025-11-24 22:43:42.259 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:43:42 compute-0 nova_compute[189608]: 2025-11-24 22:43:42.343 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 22:43:42 compute-0 nova_compute[189608]: 2025-11-24 22:43:42.344 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 22:43:42 compute-0 nova_compute[189608]: 2025-11-24 22:43:42.380 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:43:42 compute-0 nova_compute[189608]: 2025-11-24 22:43:42.395 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:43:42 compute-0 nova_compute[189608]: 2025-11-24 22:43:42.417 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 22:43:42 compute-0 nova_compute[189608]: 2025-11-24 22:43:42.418 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:43:42 compute-0 nova_compute[189608]: 2025-11-24 22:43:42.602 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:43:43 compute-0 nova_compute[189608]: 2025-11-24 22:43:43.394 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:43:43 compute-0 nova_compute[189608]: 2025-11-24 22:43:43.395 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 22:43:43 compute-0 nova_compute[189608]: 2025-11-24 22:43:43.422 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 22:43:44 compute-0 nova_compute[189608]: 2025-11-24 22:43:44.125 189613 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764024209.1246185, 715e08a7-7174-4e14-a83d-67aab18333d8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 22:43:44 compute-0 nova_compute[189608]: 2025-11-24 22:43:44.126 189613 INFO nova.compute.manager [-] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] VM Stopped (Lifecycle Event)
Nov 24 22:43:44 compute-0 nova_compute[189608]: 2025-11-24 22:43:44.149 189613 DEBUG nova.compute.manager [None req-5368e3ec-c2bf-458e-ad25-3b0b5caf44be - - - - - -] [instance: 715e08a7-7174-4e14-a83d-67aab18333d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 22:43:44 compute-0 nova_compute[189608]: 2025-11-24 22:43:44.162 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:43:44 compute-0 nova_compute[189608]: 2025-11-24 22:43:44.579 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:43:44 compute-0 sshd-session[259225]: Disconnecting invalid user Test 185.217.1.246 port 59296: Change of username or service not allowed: (Test,ssh-connection) -> (prod,ssh-connection) [preauth]
Nov 24 22:43:45 compute-0 podman[259228]: 2025-11-24 22:43:45.574467478 +0000 UTC m=+0.121151510 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, release-0.7.12=, vcs-type=git, vendor=Red Hat, Inc., config_id=edpm, name=ubi9, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, architecture=x86_64)
Nov 24 22:43:45 compute-0 podman[259230]: 2025-11-24 22:43:45.577969266 +0000 UTC m=+0.126187837 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible)
Nov 24 22:43:45 compute-0 podman[259229]: 2025-11-24 22:43:45.59121164 +0000 UTC m=+0.133956429 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, distribution-scope=public, managed_by=edpm_ansible, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, architecture=x86_64, build-date=2025-08-20T13:12:41, config_id=edpm, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.openshift.expose-services=, vcs-type=git)
Nov 24 22:43:46 compute-0 podman[259286]: 2025-11-24 22:43:46.528281298 +0000 UTC m=+0.091869956 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 22:43:46 compute-0 podman[259288]: 2025-11-24 22:43:46.531893961 +0000 UTC m=+0.075238778 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 24 22:43:46 compute-0 podman[259287]: 2025-11-24 22:43:46.596175306 +0000 UTC m=+0.142851427 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Nov 24 22:43:47 compute-0 nova_compute[189608]: 2025-11-24 22:43:47.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:43:49 compute-0 nova_compute[189608]: 2025-11-24 22:43:49.167 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:43:49 compute-0 nova_compute[189608]: 2025-11-24 22:43:49.581 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:43:50 compute-0 nova_compute[189608]: 2025-11-24 22:43:50.789 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:43:51 compute-0 sshd-session[259352]: Invalid user prod from 185.217.1.246 port 35239
Nov 24 22:43:53 compute-0 sshd-session[259352]: Disconnecting invalid user prod 185.217.1.246 port 35239: Change of username or service not allowed: (prod,ssh-connection) -> (qaz,ssh-connection) [preauth]
Nov 24 22:43:53 compute-0 nova_compute[189608]: 2025-11-24 22:43:53.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:43:53 compute-0 nova_compute[189608]: 2025-11-24 22:43:53.795 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:43:54 compute-0 nova_compute[189608]: 2025-11-24 22:43:54.171 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:43:54 compute-0 nova_compute[189608]: 2025-11-24 22:43:54.584 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:43:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:43:54.609 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:43:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:43:54.611 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:43:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:43:54.612 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:43:54 compute-0 nova_compute[189608]: 2025-11-24 22:43:54.792 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:43:54 compute-0 nova_compute[189608]: 2025-11-24 22:43:54.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:43:54 compute-0 nova_compute[189608]: 2025-11-24 22:43:54.794 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 22:43:57 compute-0 nova_compute[189608]: 2025-11-24 22:43:57.795 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:43:58 compute-0 sshd-session[259355]: Invalid user qaz from 185.217.1.246 port 61998
Nov 24 22:43:58 compute-0 sshd-session[259355]: Disconnecting invalid user qaz 185.217.1.246 port 61998: Change of username or service not allowed: (qaz,ssh-connection) -> (vodafone,ssh-connection) [preauth]
Nov 24 22:43:59 compute-0 nova_compute[189608]: 2025-11-24 22:43:59.175 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:43:59 compute-0 nova_compute[189608]: 2025-11-24 22:43:59.587 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:43:59 compute-0 podman[203795]: time="2025-11-24T22:43:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:43:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:43:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Nov 24 22:43:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:43:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4340 "" "Go-http-client/1.1"
Nov 24 22:44:01 compute-0 openstack_network_exporter[205945]: ERROR   22:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:44:01 compute-0 openstack_network_exporter[205945]: ERROR   22:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:44:01 compute-0 openstack_network_exporter[205945]: ERROR   22:44:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:44:01 compute-0 openstack_network_exporter[205945]: ERROR   22:44:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:44:01 compute-0 openstack_network_exporter[205945]: ERROR   22:44:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:44:01 compute-0 podman[259359]: 2025-11-24 22:44:01.598089424 +0000 UTC m=+0.140161113 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 24 22:44:02 compute-0 sshd-session[259357]: Invalid user vodafone from 185.217.1.246 port 16692
Nov 24 22:44:04 compute-0 nova_compute[189608]: 2025-11-24 22:44:04.179 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:44:04 compute-0 sshd-session[259357]: Disconnecting invalid user vodafone 185.217.1.246 port 16692: Change of username or service not allowed: (vodafone,ssh-connection) -> (ubnt,ssh-connection) [preauth]
Nov 24 22:44:04 compute-0 nova_compute[189608]: 2025-11-24 22:44:04.590 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:44:06 compute-0 podman[259383]: 2025-11-24 22:44:06.543323213 +0000 UTC m=+0.099872217 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, managed_by=edpm_ansible)
Nov 24 22:44:06 compute-0 podman[259382]: 2025-11-24 22:44:06.573915977 +0000 UTC m=+0.121748429 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 24 22:44:09 compute-0 nova_compute[189608]: 2025-11-24 22:44:09.183 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:44:09 compute-0 nova_compute[189608]: 2025-11-24 22:44:09.592 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:44:09 compute-0 sshd-session[259422]: Invalid user ubnt from 185.217.1.246 port 48484
Nov 24 22:44:14 compute-0 nova_compute[189608]: 2025-11-24 22:44:14.186 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:44:14 compute-0 sshd-session[259422]: Disconnecting invalid user ubnt 185.217.1.246 port 48484: Change of username or service not allowed: (ubnt,ssh-connection) -> (Administrator,ssh-connection) [preauth]
Nov 24 22:44:14 compute-0 nova_compute[189608]: 2025-11-24 22:44:14.595 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:44:16 compute-0 podman[259425]: 2025-11-24 22:44:16.513427071 +0000 UTC m=+0.074130803 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, architecture=x86_64, managed_by=edpm_ansible, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, release=1214.1726694543)
Nov 24 22:44:16 compute-0 podman[259427]: 2025-11-24 22:44:16.526220981 +0000 UTC m=+0.078952844 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 24 22:44:16 compute-0 podman[259426]: 2025-11-24 22:44:16.584062025 +0000 UTC m=+0.127060775 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.component=ubi9-minimal-container, version=9.6, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, managed_by=edpm_ansible, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 24 22:44:16 compute-0 podman[259483]: 2025-11-24 22:44:16.667985702 +0000 UTC m=+0.077702084 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 22:44:16 compute-0 podman[259482]: 2025-11-24 22:44:16.67081326 +0000 UTC m=+0.089567084 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 24 22:44:16 compute-0 podman[259520]: 2025-11-24 22:44:16.786440147 +0000 UTC m=+0.114771401 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.638 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.638 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.638 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c58034050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.639 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f4c57fbbfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.640 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580340e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.640 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.640 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.640 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.640 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c59b99220>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.641 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.641 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.641 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.641 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.641 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.641 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c580343b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.641 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.641 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.641 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.641 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.642 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.642 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.642 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.642 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.642 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.642 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.642 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.642 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbb710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.642 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fb9760>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.642 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f4c57fbbf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f4c596bb680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.643 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.643 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f4c580340b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.644 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.644 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f4c57fba4e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.644 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.644 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f4c57fbb080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.644 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.645 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f4c57fbb1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.645 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.645 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f4c57fb9730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.645 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.645 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f4c57fbb200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.646 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.646 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f4c57fbb260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.646 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.646 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f4c57fbb2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.646 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.647 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f4c57fbb320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.647 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.647 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f4c57fbb380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.647 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.647 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f4c58034380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.647 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.648 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f4c57fbb740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.648 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.648 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f4c57fbb3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.648 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.648 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f4c57fbb9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.648 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.649 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f4c57fbb440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.649 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.649 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f4c57fbbc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.649 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.649 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f4c57fbbcb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.650 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.650 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f4c57fbbd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.650 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.650 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f4c57fbbda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.650 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.650 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f4c57fbbe30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.651 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.651 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f4c57fbb680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.651 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.651 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f4c57fbbec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.651 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.651 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f4c57fbb6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.652 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.652 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f4c59b99130>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.652 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.652 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f4c57fbbf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f4c5814bb30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.652 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.653 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.653 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.653 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.653 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.653 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.653 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.653 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.653 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.654 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.654 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.654 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.654 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.654 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.654 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.654 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.654 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.654 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.654 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.654 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.654 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.654 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.655 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.655 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.655 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.655 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:44:17 compute-0 ceilometer_agent_compute[200333]: 2025-11-24 22:44:17.655 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 24 22:44:19 compute-0 ovn_controller[97889]: 2025-11-24T22:44:19Z|00203|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Nov 24 22:44:19 compute-0 nova_compute[189608]: 2025-11-24 22:44:19.190 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:44:19 compute-0 sshd-session[259424]: Invalid user Administrator from 185.217.1.246 port 14764
Nov 24 22:44:19 compute-0 nova_compute[189608]: 2025-11-24 22:44:19.597 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:44:20 compute-0 sshd-session[259424]: Disconnecting invalid user Administrator 185.217.1.246 port 14764: Change of username or service not allowed: (Administrator,ssh-connection) -> (squid,ssh-connection) [preauth]
Nov 24 22:44:23 compute-0 sshd-session[259552]: Invalid user sol from 45.148.10.240 port 46910
Nov 24 22:44:23 compute-0 sshd-session[259552]: Connection closed by invalid user sol 45.148.10.240 port 46910 [preauth]
Nov 24 22:44:24 compute-0 nova_compute[189608]: 2025-11-24 22:44:24.193 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:44:24 compute-0 nova_compute[189608]: 2025-11-24 22:44:24.602 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:44:24 compute-0 sshd-session[259550]: Invalid user squid from 185.217.1.246 port 44559
Nov 24 22:44:25 compute-0 sshd-session[259550]: Disconnecting invalid user squid 185.217.1.246 port 44559: Change of username or service not allowed: (squid,ssh-connection) -> (qemu,ssh-connection) [preauth]
Nov 24 22:44:28 compute-0 sshd-session[259554]: Disconnecting authenticating user qemu 185.217.1.246 port 52494: Change of username or service not allowed: (qemu,ssh-connection) -> (odoo16,ssh-connection) [preauth]
Nov 24 22:44:29 compute-0 nova_compute[189608]: 2025-11-24 22:44:29.197 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:44:29 compute-0 nova_compute[189608]: 2025-11-24 22:44:29.604 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:44:29 compute-0 podman[203795]: time="2025-11-24T22:44:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:44:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:44:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Nov 24 22:44:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:44:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4347 "" "Go-http-client/1.1"
Nov 24 22:44:31 compute-0 openstack_network_exporter[205945]: ERROR   22:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:44:31 compute-0 openstack_network_exporter[205945]: ERROR   22:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:44:31 compute-0 openstack_network_exporter[205945]: ERROR   22:44:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:44:31 compute-0 openstack_network_exporter[205945]: ERROR   22:44:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:44:31 compute-0 openstack_network_exporter[205945]: ERROR   22:44:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:44:32 compute-0 systemd[1]: Starting dnf makecache...
Nov 24 22:44:32 compute-0 podman[259559]: 2025-11-24 22:44:32.55019409 +0000 UTC m=+0.093452176 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 24 22:44:32 compute-0 dnf[259560]: Metadata cache refreshed recently.
Nov 24 22:44:32 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 24 22:44:32 compute-0 systemd[1]: Finished dnf makecache.
Nov 24 22:44:33 compute-0 sshd-session[259557]: Invalid user odoo16 from 185.217.1.246 port 5661
Nov 24 22:44:33 compute-0 sshd-session[259557]: Disconnecting invalid user odoo16 185.217.1.246 port 5661: Change of username or service not allowed: (odoo16,ssh-connection) -> (client,ssh-connection) [preauth]
Nov 24 22:44:34 compute-0 nova_compute[189608]: 2025-11-24 22:44:34.201 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:44:34 compute-0 nova_compute[189608]: 2025-11-24 22:44:34.608 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:44:37 compute-0 podman[259585]: 2025-11-24 22:44:37.548448753 +0000 UTC m=+0.093158667 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 24 22:44:37 compute-0 podman[259586]: 2025-11-24 22:44:37.560675124 +0000 UTC m=+0.109578579 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, managed_by=edpm_ansible, tcib_managed=true)
Nov 24 22:44:37 compute-0 sshd-session[259583]: Invalid user client from 185.217.1.246 port 26505
Nov 24 22:44:38 compute-0 sshd-session[259583]: Disconnecting invalid user client 185.217.1.246 port 26505: Change of username or service not allowed: (client,ssh-connection) -> (kafka,ssh-connection) [preauth]
Nov 24 22:44:39 compute-0 nova_compute[189608]: 2025-11-24 22:44:39.207 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:44:39 compute-0 nova_compute[189608]: 2025-11-24 22:44:39.608 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:44:41 compute-0 sshd-session[259621]: Invalid user kafka from 185.217.1.246 port 42987
Nov 24 22:44:41 compute-0 sshd-session[259621]: Disconnecting invalid user kafka 185.217.1.246 port 42987: Change of username or service not allowed: (kafka,ssh-connection) -> (aaa,ssh-connection) [preauth]
Nov 24 22:44:42 compute-0 nova_compute[189608]: 2025-11-24 22:44:42.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:44:42 compute-0 nova_compute[189608]: 2025-11-24 22:44:42.795 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 22:44:42 compute-0 nova_compute[189608]: 2025-11-24 22:44:42.795 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 22:44:42 compute-0 nova_compute[189608]: 2025-11-24 22:44:42.812 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 22:44:43 compute-0 nova_compute[189608]: 2025-11-24 22:44:43.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:44:43 compute-0 nova_compute[189608]: 2025-11-24 22:44:43.826 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:44:43 compute-0 nova_compute[189608]: 2025-11-24 22:44:43.826 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:44:43 compute-0 nova_compute[189608]: 2025-11-24 22:44:43.827 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:44:43 compute-0 nova_compute[189608]: 2025-11-24 22:44:43.827 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 22:44:44 compute-0 nova_compute[189608]: 2025-11-24 22:44:44.171 189613 WARNING nova.virt.libvirt.driver [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 22:44:44 compute-0 nova_compute[189608]: 2025-11-24 22:44:44.173 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5310MB free_disk=72.12620162963867GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 22:44:44 compute-0 nova_compute[189608]: 2025-11-24 22:44:44.173 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:44:44 compute-0 nova_compute[189608]: 2025-11-24 22:44:44.173 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:44:44 compute-0 nova_compute[189608]: 2025-11-24 22:44:44.212 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:44:44 compute-0 nova_compute[189608]: 2025-11-24 22:44:44.236 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 22:44:44 compute-0 nova_compute[189608]: 2025-11-24 22:44:44.236 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 22:44:44 compute-0 nova_compute[189608]: 2025-11-24 22:44:44.257 189613 DEBUG nova.compute.provider_tree [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed in ProviderTree for provider: 7680d048-14f1-46f8-a34d-a7eb32eb11df update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 22:44:44 compute-0 nova_compute[189608]: 2025-11-24 22:44:44.270 189613 DEBUG nova.scheduler.client.report [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Inventory has not changed for provider 7680d048-14f1-46f8-a34d-a7eb32eb11df based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 22:44:44 compute-0 nova_compute[189608]: 2025-11-24 22:44:44.271 189613 DEBUG nova.compute.resource_tracker [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 22:44:44 compute-0 nova_compute[189608]: 2025-11-24 22:44:44.271 189613 DEBUG oslo_concurrency.lockutils [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.098s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:44:44 compute-0 nova_compute[189608]: 2025-11-24 22:44:44.614 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:44:47 compute-0 podman[259625]: 2025-11-24 22:44:47.563276855 +0000 UTC m=+0.112365835 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, managed_by=edpm_ansible, version=9.4, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, release=1214.1726694543, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.buildah.version=1.29.0)
Nov 24 22:44:47 compute-0 podman[259627]: 2025-11-24 22:44:47.573840286 +0000 UTC m=+0.109155196 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 22:44:47 compute-0 podman[259633]: 2025-11-24 22:44:47.590050031 +0000 UTC m=+0.106979598 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 24 22:44:47 compute-0 podman[259626]: 2025-11-24 22:44:47.595076978 +0000 UTC m=+0.129212482 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vendor=Red Hat, Inc., config_id=edpm, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, managed_by=edpm_ansible, architecture=x86_64, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 24 22:44:47 compute-0 podman[259645]: 2025-11-24 22:44:47.61213835 +0000 UTC m=+0.119294582 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 22:44:47 compute-0 podman[259634]: 2025-11-24 22:44:47.63587397 +0000 UTC m=+0.146759018 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 24 22:44:48 compute-0 nova_compute[189608]: 2025-11-24 22:44:48.272 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:44:48 compute-0 sshd-session[259623]: Invalid user aaa from 185.217.1.246 port 2058
Nov 24 22:44:48 compute-0 sshd-session[259623]: Disconnecting invalid user aaa 185.217.1.246 port 2058: Change of username or service not allowed: (aaa,ssh-connection) -> (splunk,ssh-connection) [preauth]
Nov 24 22:44:49 compute-0 nova_compute[189608]: 2025-11-24 22:44:49.216 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:44:49 compute-0 nova_compute[189608]: 2025-11-24 22:44:49.616 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:44:51 compute-0 sshd-session[259741]: Invalid user splunk from 185.217.1.246 port 20226
Nov 24 22:44:51 compute-0 sshd-session[259741]: Disconnecting invalid user splunk 185.217.1.246 port 20226: Change of username or service not allowed: (splunk,ssh-connection) -> (aman,ssh-connection) [preauth]
Nov 24 22:44:52 compute-0 nova_compute[189608]: 2025-11-24 22:44:52.788 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:44:54 compute-0 nova_compute[189608]: 2025-11-24 22:44:54.220 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:44:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:44:54.610 106776 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 22:44:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:44:54.610 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 22:44:54 compute-0 ovn_metadata_agent[106771]: 2025-11-24 22:44:54.610 106776 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 22:44:54 compute-0 nova_compute[189608]: 2025-11-24 22:44:54.618 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:44:54 compute-0 nova_compute[189608]: 2025-11-24 22:44:54.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:44:54 compute-0 nova_compute[189608]: 2025-11-24 22:44:54.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:44:54 compute-0 nova_compute[189608]: 2025-11-24 22:44:54.794 189613 DEBUG nova.compute.manager [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 22:44:55 compute-0 nova_compute[189608]: 2025-11-24 22:44:55.794 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:44:56 compute-0 sshd-session[259744]: Invalid user aman from 185.217.1.246 port 40322
Nov 24 22:44:56 compute-0 nova_compute[189608]: 2025-11-24 22:44:56.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:44:56 compute-0 sshd-session[259744]: Disconnecting invalid user aman 185.217.1.246 port 40322: Change of username or service not allowed: (aman,ssh-connection) -> (nexus,ssh-connection) [preauth]
Nov 24 22:44:57 compute-0 nova_compute[189608]: 2025-11-24 22:44:57.793 189613 DEBUG oslo_service.periodic_task [None req-ef789299-a539-4e2e-9a25-a0d46929532c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 22:44:59 compute-0 nova_compute[189608]: 2025-11-24 22:44:59.223 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:44:59 compute-0 nova_compute[189608]: 2025-11-24 22:44:59.621 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:44:59 compute-0 podman[203795]: time="2025-11-24T22:44:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:44:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:44:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Nov 24 22:44:59 compute-0 podman[203795]: @ - - [24/Nov/2025:22:44:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4353 "" "Go-http-client/1.1"
Nov 24 22:45:01 compute-0 openstack_network_exporter[205945]: ERROR   22:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:45:01 compute-0 openstack_network_exporter[205945]: ERROR   22:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:45:01 compute-0 openstack_network_exporter[205945]: ERROR   22:45:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:45:01 compute-0 openstack_network_exporter[205945]: ERROR   22:45:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:45:01 compute-0 openstack_network_exporter[205945]: ERROR   22:45:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:45:03 compute-0 podman[259748]: 2025-11-24 22:45:03.543922461 +0000 UTC m=+0.095744047 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 24 22:45:03 compute-0 sshd-session[259746]: Invalid user nexus from 185.217.1.246 port 57998
Nov 24 22:45:04 compute-0 nova_compute[189608]: 2025-11-24 22:45:04.228 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:45:04 compute-0 sshd-session[259746]: Disconnecting invalid user nexus 185.217.1.246 port 57998: Change of username or service not allowed: (nexus,ssh-connection) -> (storage,ssh-connection) [preauth]
Nov 24 22:45:04 compute-0 nova_compute[189608]: 2025-11-24 22:45:04.623 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:45:07 compute-0 sshd-session[259770]: Invalid user storage from 185.217.1.246 port 17959
Nov 24 22:45:07 compute-0 podman[259772]: 2025-11-24 22:45:07.92813964 +0000 UTC m=+0.124638229 container health_status 5e30414a0959af2ea10fb5a1899ed01aa5229bdcf123991b2e26be3a2d2d9cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 24 22:45:07 compute-0 podman[259773]: 2025-11-24 22:45:07.962862633 +0000 UTC m=+0.143762255 container health_status a59804e2d53e62c8685f5e29649928e3b4b5f93a63fc59dc61949cda3090811d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844)
Nov 24 22:45:09 compute-0 nova_compute[189608]: 2025-11-24 22:45:09.231 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:45:09 compute-0 sshd-session[259770]: Disconnecting invalid user storage 185.217.1.246 port 17959: Change of username or service not allowed: (storage,ssh-connection) -> (scsadmin,ssh-connection) [preauth]
Nov 24 22:45:09 compute-0 nova_compute[189608]: 2025-11-24 22:45:09.626 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:45:14 compute-0 nova_compute[189608]: 2025-11-24 22:45:14.235 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:45:14 compute-0 nova_compute[189608]: 2025-11-24 22:45:14.630 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:45:14 compute-0 sshd-session[259810]: Invalid user scsadmin from 185.217.1.246 port 40791
Nov 24 22:45:16 compute-0 sshd-session[259810]: Disconnecting invalid user scsadmin 185.217.1.246 port 40791: Change of username or service not allowed: (scsadmin,ssh-connection) -> (john,ssh-connection) [preauth]
Nov 24 22:45:16 compute-0 sshd-session[259812]: Accepted publickey for zuul from 192.168.122.10 port 50910 ssh2: ECDSA SHA256:EZFVJPqmz26FBCQurIACD0v24tZqtt5C19oQoS60ZHY
Nov 24 22:45:16 compute-0 systemd-logind[806]: New session 32 of user zuul.
Nov 24 22:45:16 compute-0 systemd[1]: Started Session 32 of User zuul.
Nov 24 22:45:16 compute-0 sshd-session[259812]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 22:45:17 compute-0 sudo[259816]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Nov 24 22:45:17 compute-0 sudo[259816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 22:45:18 compute-0 podman[259854]: 2025-11-24 22:45:18.222696849 +0000 UTC m=+0.097873394 container health_status c9f901a6e6555ced0e53da5df1a5b1787427585133c3cd1b115d2828d7e87ad0 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 24 22:45:18 compute-0 podman[259851]: 2025-11-24 22:45:18.222967838 +0000 UTC m=+0.113125210 container health_status 178df6a27828c3804fa298b67b7542d8c3aa0e55574460497a80a135215f07db (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, container_name=kepler, vendor=Red Hat, Inc., config_id=edpm, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.openshift.expose-services=, maintainer=Red Hat, Inc., distribution-scope=public, com.redhat.component=ubi9-container, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Nov 24 22:45:18 compute-0 podman[259852]: 2025-11-24 22:45:18.247280816 +0000 UTC m=+0.136074445 container health_status 366c567530fcd73f4431f163d4758b87d8606f98233191d28e598e1541ac526a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, release=1755695350, com.redhat.component=ubi9-minimal-container, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_id=edpm, vcs-type=git, vendor=Red Hat, Inc., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, name=ubi9-minimal)
Nov 24 22:45:18 compute-0 podman[259866]: 2025-11-24 22:45:18.256832074 +0000 UTC m=+0.115230265 container health_status fe13d6734cfaa98582d38f9a7444c5da98d82725a1ff4abdf78d7fe31a71c3c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 24 22:45:18 compute-0 podman[259853]: 2025-11-24 22:45:18.257109213 +0000 UTC m=+0.136827629 container health_status a0fbbff736dcbd289bac0d2cf98d1b77fe95060b818f2f01c83e460b7f4b4846 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 22:45:18 compute-0 podman[259855]: 2025-11-24 22:45:18.293396144 +0000 UTC m=+0.151152185 container health_status d073474acd17492c29dec380230f4cda7421df476ef2adf91c5cd1bf4e409f94 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 24 22:45:19 compute-0 nova_compute[189608]: 2025-11-24 22:45:19.238 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:45:19 compute-0 nova_compute[189608]: 2025-11-24 22:45:19.633 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:45:22 compute-0 ovs-vsctl[260103]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 24 22:45:23 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 259840 (sos)
Nov 24 22:45:23 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Nov 24 22:45:23 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Nov 24 22:45:23 compute-0 virtqemud[189136]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 24 22:45:23 compute-0 virtqemud[189136]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 24 22:45:23 compute-0 virtqemud[189136]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 24 22:45:23 compute-0 sshd-session[259850]: Invalid user john from 185.217.1.246 port 63457
Nov 24 22:45:23 compute-0 sshd-session[259850]: Disconnecting invalid user john 185.217.1.246 port 63457: Change of username or service not allowed: (john,ssh-connection) -> (odoo17,ssh-connection) [preauth]
Nov 24 22:45:24 compute-0 nova_compute[189608]: 2025-11-24 22:45:24.242 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:45:24 compute-0 nova_compute[189608]: 2025-11-24 22:45:24.634 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:45:24 compute-0 crontab[260524]: (root) LIST (root)
Nov 24 22:45:27 compute-0 sshd-session[260599]: Invalid user odoo17 from 185.217.1.246 port 33771
Nov 24 22:45:27 compute-0 systemd[1]: Starting Hostname Service...
Nov 24 22:45:27 compute-0 systemd[1]: Started Hostname Service.
Nov 24 22:45:27 compute-0 sshd-session[260599]: Disconnecting invalid user odoo17 185.217.1.246 port 33771: Change of username or service not allowed: (odoo17,ssh-connection) -> (stack,ssh-connection) [preauth]
Nov 24 22:45:29 compute-0 nova_compute[189608]: 2025-11-24 22:45:29.245 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:45:29 compute-0 nova_compute[189608]: 2025-11-24 22:45:29.636 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:45:29 compute-0 podman[203795]: time="2025-11-24T22:45:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 24 22:45:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:45:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Nov 24 22:45:29 compute-0 podman[203795]: @ - - [24/Nov/2025:22:45:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4343 "" "Go-http-client/1.1"
Nov 24 22:45:31 compute-0 openstack_network_exporter[205945]: ERROR   22:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:45:31 compute-0 openstack_network_exporter[205945]: ERROR   22:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 24 22:45:31 compute-0 openstack_network_exporter[205945]: ERROR   22:45:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 24 22:45:31 compute-0 openstack_network_exporter[205945]: ERROR   22:45:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 24 22:45:31 compute-0 openstack_network_exporter[205945]: ERROR   22:45:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 24 22:45:33 compute-0 sshd-session[260755]: Invalid user stack from 185.217.1.246 port 41037
Nov 24 22:45:33 compute-0 podman[261267]: 2025-11-24 22:45:33.79539488 +0000 UTC m=+0.081707749 container health_status 9d00b43530e24fb4754a778e3ce99618f7b6fce25695244b1547996343816b27 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 24 22:45:34 compute-0 sshd-session[260755]: Disconnecting invalid user stack 185.217.1.246 port 41037: Change of username or service not allowed: (stack,ssh-connection) -> (array,ssh-connection) [preauth]
Nov 24 22:45:34 compute-0 nova_compute[189608]: 2025-11-24 22:45:34.248 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:45:34 compute-0 nova_compute[189608]: 2025-11-24 22:45:34.638 189613 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 22:45:35 compute-0 sshd-session[260581]: error: kex_exchange_identification: read: Connection timed out
Nov 24 22:45:35 compute-0 sshd-session[260581]: banner exchange: Connection from 58.20.201.4 port 51866: Connection timed out
Nov 24 22:45:36 compute-0 ovs-appctl[261908]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
